Depth (disparity) estimation from 4D Light Field (LF) images has been an active research topic in recent years. Most studies have focused on depth estimation from static 4D LF images and do not exploit temporal information, i.e., LF videos. This paper proposes an end-to-end neural network architecture for depth estimation from 4D LF videos. This study also constructs a medium-scale synthetic 4D LF video dataset suitable for training deep learning-based methods. Experimental results on synthetic and real-world 4D LF videos show that temporal information improves depth estimation accuracy in noisy regions. Our dataset and source code are available at: https://mediaeng-lfv.github.io/LFV_Disparity_Estimation/.
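The abstract does not detail the network itself, but a 4D LF video is naturally handled as a tensor with angular, temporal, and spatial axes. The following PyTorch sketch is only an illustration of that data layout, not the authors' architecture: the layout (batch, time, angular_u, angular_v, channels, height, width), the toy model, and all names are assumptions for demonstration.

```python
import torch
import torch.nn as nn

# Minimal sketch (NOT the paper's architecture): stack all sub-aperture views
# and frames along the channel axis and predict a per-pixel disparity map
# for the central view. Tensor layout is assumed to be
# (batch, time, angular_u, angular_v, channels, height, width).

class ToyLFVideoDepthNet(nn.Module):
    def __init__(self, num_views: int = 9 * 9, num_frames: int = 3):
        super().__init__()
        in_channels = num_views * num_frames * 3  # fold views/frames/RGB into channels
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),  # single-channel disparity
        )

    def forward(self, lf_video: torch.Tensor) -> torch.Tensor:
        # lf_video: (B, T, U, V, C, H, W) -> (B, T*U*V*C, H, W)
        b, t, u, v, c, h, w = lf_video.shape
        x = lf_video.reshape(b, t * u * v * c, h, w)
        return self.encoder(x)  # (B, 1, H, W) disparity map


if __name__ == "__main__":
    # Example: a 3-frame clip of a 9x9 light field at 64x64 spatial resolution
    dummy = torch.randn(1, 3, 9, 9, 3, 64, 64)
    model = ToyLFVideoDepthNet()
    print(model(dummy).shape)  # torch.Size([1, 1, 64, 64])
```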
Takahiro Kinoshita and Satoshi Ono
"Depth estimation from 4D light field videos", Proc. SPIE 11766, International Workshop on Advanced Imaging Technology (IWAIT) 2021, 117660A (13 March 2021); https://doi.org/10.1117/12.2591012