Images reconstructed by conventional photoacoustic tomography algorithms suffer from severe artifacts and information loss under limited-view acquisition. To address this problem, this paper proposes a limited-view photoacoustic tomography reconstruction method based on a generative model. The network is trained with noise-perturbed data and learns the score function (the gradient of the logarithmic probability density) of the training dataset; the trained network can then generate samples that conform to the distribution of the training data. Simulated blood-vessel data were used to evaluate the performance of the proposed method. Experimental results show that, compared with traditional reconstruction methods, the proposed method effectively removes artifacts and improves image quality for measured data collected over 90°, 120°, and 180° views.
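As a rough illustration of the noise-perturbed score-learning idea described above, the following minimal numpy sketch fits a score model on toy 1-D Gaussian data using the denoising score matching target. The linear model and all parameter choices are illustrative simplifications, not the paper's deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": samples from a 1-D standard normal, whose
# true score is s(x) = -x (gradient of log N(0, 1)).
x = rng.standard_normal(5000)

sigma = 0.5                      # noise level used to perturb the data
z = rng.standard_normal(x.shape)
x_tilde = x + sigma * z          # noise-perturbed samples

# Denoising score matching target: score of the perturbation kernel,
# grad log q(x_tilde | x) = -(x_tilde - x) / sigma^2.
target = -(x_tilde - x) / sigma**2

# Fit a linear score model s_theta(x) = a * x by least squares, a
# stand-in for the network trained on this regression target.
a = np.sum(x_tilde * target) / np.sum(x_tilde**2)

# Analytically, the score of the sigma-smoothed Gaussian is
# -x / (1 + sigma^2), so a should be close to -1 / 1.25 = -0.8.
```

Matching the fitted slope to the analytic smoothed score is what lets the trained model be used as a data prior at sampling time.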
As an important three-dimensional (3D) display technology, holographic 3D display has great application prospects in virtual and augmented reality. However, generating 3D holograms rapidly with high reconstruction quality remains challenging. Here, we propose a high-speed 3D hologram generation method based on a convolutional neural network (CNN). The CNN is trained in an unsupervised manner, and the trained network can generate a 3D hologram with 1024×1024 resolution on 100 planes within 60 ms. The feasibility and effectiveness of the proposed method have been demonstrated by simulation. This method will further expand the application of holographic 3D display in remote education, medical treatment, entertainment, and other fields.
Sparse-data reconstruction in photoacoustic tomography has long suffered from artifacts. To address this issue, a diffusion-model-based method for sparse-data reconstruction in photoacoustic tomography is proposed. During the training phase, the gradient of the log probability density of the image (the score) is learned as a data prior by adding noise and denoising at each step. During the testing phase, ultrasonic signals generated by pulsed-laser illumination and acquired by ultrasonic transducers surrounding the object were simulated using the k-Wave toolbox. The reconstructed image is then obtained by solving the reverse-time stochastic differential equation (SDE). Experimental results on vascular data show that the proposed algorithm effectively removes artifacts and improves image quality compared with conventional reconstruction methods with 32 and 64 detectors, respectively.
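Reverse-time SDE sampling of the kind used above can be sketched in a few lines for a toy case where the score is known analytically. The sketch below integrates the reverse of a variance-preserving SDE with Euler–Maruyama on 1-D Gaussian "data"; the constant noise schedule and all sizes are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

beta = 1.0                 # constant noise schedule of the VP SDE
T, n_steps = 5.0, 500
dt = T / n_steps
n_samples = 20000
sigma_d2 = 4.0             # variance of the toy "data" distribution N(0, 4)

def score(x, t):
    # Analytic score of the forward marginal for N(0, sigma_d2) data:
    # Var[x_t] = sigma_d2 * exp(-beta*t) + (1 - exp(-beta*t)).
    var_t = sigma_d2 * np.exp(-beta * t) + (1.0 - np.exp(-beta * t))
    return -x / var_t

# Start from the prior N(0, 1) at t = T and integrate the reverse-time SDE
# dx = [f(x, t) - g(t)^2 * score(x, t)] dt + g(t) dW backwards in time,
# with f(x, t) = -0.5 * beta * x and g(t)^2 = beta.
x = rng.standard_normal(n_samples)
for i in range(n_steps):
    t = T - i * dt
    drift = -0.5 * beta * x - beta * score(x, t)
    x = x - drift * dt + np.sqrt(beta * dt) * rng.standard_normal(n_samples)

# After integration the samples should follow the data distribution N(0, 4).
sample_var = float(np.var(x))
```

In the actual method a trained network replaces the analytic `score`, and the data-consistency terms from the measured ultrasonic signals guide the sampling.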
Artifacts in photoacoustic tomography remain a persistent problem. Here, a deep-learning method incorporating the physical model, termed PD net, is proposed to remove artifacts in limited-data photoacoustic tomography. A virtual photoacoustic tomography platform was constructed with k-Wave, and the dataset required for deep learning was generated on this platform. A U-Net architecture was used to remove artifacts in sparse-view and limited-view photoacoustic tomography. Under the sparse-view condition with 64 ultrasonic transducers, the network improves SSIM and PSNR by 274% and 66.34%, respectively, relative to its input, verifying that the method can remove artifacts in sparse-view photoacoustic tomography. The proposed method reduces artifacts and enhances anatomical contrast when the number of available ultrasonic transducers is limited, effectively reducing the manufacturing cost of photoacoustic tomography systems.
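The PSNR improvement rate quoted above (output relative to network input) can be computed as follows; this is a hedged numpy sketch with synthetic stand-in images, since the paper's data are not available here.

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

rng = np.random.default_rng(2)
ref = rng.random((64, 64))                               # stand-in ground truth
net_input = ref + 0.10 * rng.standard_normal(ref.shape)  # artifact-laden input
net_output = ref + 0.02 * rng.standard_normal(ref.shape) # cleaned-up output

# Improvement rate in percent, as reported relative to the network input.
improvement = (psnr(net_output, ref) - psnr(net_input, ref)) / psnr(net_input, ref) * 100
```

The SSIM improvement rate is defined analogously, with SSIM in place of PSNR.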
In traditional Fourier single-pixel imaging (FSPI), compressed sampling is often used to increase acquisition speed. However, images reconstructed from compressed samples often have low resolution, and their quality falls short of practical imaging requirements. To address this issue, we propose an imaging method that combines deep learning with single-pixel imaging and can reconstruct high-resolution images from only a small sampling ratio. In the training phase, the physical process of FSPI is incorporated into training: a large number of natural images are used to simulate Fourier single-pixel compressed sampling and reconstruction, and the compressed reconstructions are then employed for network training. In the testing phase, compressed reconstructions of the test dataset are input into the network for optimization. Experimental results show that, compared with traditional compressed reconstruction methods, this method effectively improves the quality of reconstructed images.
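The FSPI compressed sampling and reconstruction step simulated during training can be sketched as acquiring only the lowest-frequency Fourier coefficients and inverting with a zero-filled FFT. The square low-pass window and sampling ratios below are illustrative assumptions, not the paper's exact sampling pattern.

```python
import numpy as np

def fspi_compressed_reconstruction(img, ratio=0.1):
    """Simulate Fourier single-pixel compressed sampling: keep only the
    lowest-frequency coefficients (a fraction `ratio` of the spectrum),
    then reconstruct by zero-filled inverse FFT."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    half = int(np.sqrt(ratio * h * w) / 2)   # half-size of the low-pass window
    mask = np.zeros_like(F)
    cy, cx = h // 2, w // 2
    mask[cy - half:cy + half, cx - half:cx + half] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

rng = np.random.default_rng(3)
img = rng.random((128, 128))
recon_low = fspi_compressed_reconstruction(img, ratio=0.1)
recon_high = fspi_compressed_reconstruction(img, ratio=0.5)
err_low = np.mean((recon_low - img) ** 2)
err_high = np.mean((recon_high - img) ** 2)
```

A higher sampling ratio retains more Fourier coefficients and so lowers the reconstruction error; the network's job is to recover the detail lost at low ratios.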
KEYWORDS: 3D acquisition, 3D image processing, 3D displays, Holography, Holograms, Diffraction, Optical scanning systems, Frequency modulation, 3D image reconstruction
In recent years, three-dimensional (3D) display technology has developed rapidly and is widely used in education, medicine, the military, and other fields. Holographic 3D display is regarded as the ultimate 3D display solution. However, the lack of 3D content is one of the challenges holographic 3D display faces. Traditional methods use light-field or RGB-D cameras to acquire 3D information of real scenes, which suffer from high system complexity and long acquisition time. Here, we propose a 3D scene acquisition and reconstruction system based on optical axial scanning. First, an electrically tunable lens (ETL) was used for high-speed focus shifting (as fast as 2.5 ms). A CCD camera was synchronized with the ETL to acquire a multi-focus image sequence of the real scene. Then, the Tenengrad operator was used to extract the in-focus region of each multi-focus image, from which the 3D image was obtained. Finally, a computer-generated hologram (CGH) was computed with a layer-based diffraction algorithm and loaded onto a spatial light modulator to reconstruct the 3D holographic image. The experimental results verify the feasibility of the system. This method will expand the application of holographic 3D display in education, advertising, entertainment, and other fields.
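The Tenengrad focus measure used to pick the in-focus region is the gradient energy under Sobel filtering. The following self-contained numpy sketch computes it and checks that a crudely blurred copy of a scene scores lower; the box-blur stand-in for defocus is an assumption for illustration.

```python
import numpy as np

def tenengrad(img):
    """Tenengrad focus measure: mean squared Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Valid-mode 3x3 correlation written with shifted views.
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return float(np.mean(gx**2 + gy**2))

rng = np.random.default_rng(4)
sharp = rng.random((64, 64))            # stand-in for an in-focus frame
# Crude defocus stand-in: 3x3 box blur of the same frame.
blurred = sum(np.roll(np.roll(sharp, di, 0), dj, 1)
              for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
```

In the acquisition pipeline, the frame (or region) with the highest Tenengrad value at each depth is taken as in focus, yielding a depth map for the layer-based CGH.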
Defocus blur in images often results from inadequate camera settings or depth-of-field restrictions. In recent years, with the emergence and advancement of deep learning, representation-learning-based methods have achieved remarkable success in image defocus enhancement. In this paper, a rapid axial scanning system is proposed for efficient acquisition of defocus-enhancement datasets. A multi-focus image sequence of the same scene at different focus depths is captured and fused into an all-in-focus image (the ground truth), forming a defocus-enhancement dataset. Multiple such datasets can be obtained with this approach. Experimental results confirm the feasibility and effectiveness of the approach.