Dilated residual encode–decode networks for image denoising
Shengyu Li, Xuesong Liu, Rongxin Jiang, Fan Zhou, Yaowu Chen
Abstract
Owing to recent advancements, very deep convolutional neural networks (CNNs) have found application in image denoising. However, while deeper models yield better restoration performance, they suffer from a large number of parameters and increased training difficulty. To address these issues, we propose a CNN-based framework, named dilated residual encode–decode networks (DRED-Net), for image denoising, which learns a direct end-to-end mapping from corrupted images to clean images using few parameters. The proposed network consists of multiple layers of convolution and deconvolution operators; in addition, we use dilated convolutions to boost performance without increasing the depth or complexity of the model. Extensive experiments on synthetic noisy images are conducted to evaluate DRED-Net, and the results are compared with those of state-of-the-art denoising methods. Our experimental results show that DRED-Net achieves performance comparable with that of other state-of-the-art methods on image denoising tasks.
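The abstract's key idea is that dilated (atrous) convolution enlarges a filter's receptive field without adding parameters, by sampling the input with gaps between kernel taps. The following is a minimal NumPy sketch of that operation, not the authors' implementation; the function name and the "valid" padding choice are illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Valid 2-D correlation with a dilated (atrous) kernel.

    A dilation of d places (d - 1) gaps between kernel taps, so a k x k
    kernel covers an effective d*(k-1)+1 window while keeping only k*k
    weights -- the mechanism DRED-Net uses to widen the receptive field
    without extra depth or parameters.
    """
    kh, kw = kernel.shape
    # Effective (dilated) kernel extent
    eh = dilation * (kh - 1) + 1
    ew = dilation * (kw - 1) + 1
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slicing picks every dilation-th pixel of the window
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

# A 3x3 kernel with dilation 2 behaves like a sparse 5x5 filter:
img = np.ones((8, 8))
k = np.ones((3, 3))
y = dilated_conv2d(img, k, dilation=2)
print(y.shape)  # (4, 4): valid output for an effective 5x5 window
```

In deep-learning frameworks this corresponds to the `dilation` argument of a standard convolution layer; stacking layers with growing dilation rates grows the receptive field exponentially while the parameter count grows only linearly.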
© 2018 SPIE and IS&T 1017-9909/2018/$25.00
Shengyu Li, Xuesong Liu, Rongxin Jiang, Fan Zhou, and Yaowu Chen "Dilated residual encode–decode networks for image denoising," Journal of Electronic Imaging 27(6), 063005 (15 November 2018). https://doi.org/10.1117/1.JEI.27.6.063005
Received: 4 June 2018; Accepted: 23 October 2018; Published: 15 November 2018
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Convolution
Image denoising
Denoising
Performance modeling
Data modeling
Deconvolution
Network architectures