In this paper, we propose a depth estimation framework for light field camera arrays. The goal of the proposed framework is to compute depth information that is consistent across the multiple cameras, which is hard to achieve with conventional approaches based on pairwise stereo matching. We first perform stereo matching on adjacent image pairs using a convolutional neural network-based correspondence scoring model. Once the local disparity maps are estimated, we consolidate the disparity values so that they can be shared globally across the multiple views. Finally, we refine the depth values in the image domain by introducing a novel edge-aware image segmentation method to obtain a semantics-aware global depth map. The proposed framework is evaluated on three different real-world scenarios, and the experimental results validate that the proposed method produces accurate and consistent depth maps for images captured by light field camera arrays.
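To illustrate the consolidation step, the following is a minimal sketch of how per-pair disparity maps could be brought into a common metric space and fused. The function names, the pinhole conversion, and the per-pixel median fusion are illustrative assumptions, not the paper's actual consolidation algorithm, which operates on globally shared disparity values across views.

```python
import numpy as np

def disparity_to_depth(disp, focal, baseline):
    """Convert a disparity map (pixels) to metric depth via Z = f * B / d.
    A small floor on d avoids division by zero at untextured pixels."""
    return focal * baseline / np.maximum(disp, 1e-6)

def consolidate_depths(depth_maps):
    """Fuse several depth estimates (assumed already warped to a common
    reference view) into one map with a robust per-pixel median."""
    return np.median(np.stack(depth_maps), axis=0)

# Example: two adjacent pairs agree, a third is an outlier.
d1 = disparity_to_depth(np.full((2, 2), 2.0), focal=100.0, baseline=0.1)  # 5.0 m
d2 = disparity_to_depth(np.full((2, 2), 2.0), focal=100.0, baseline=0.1)  # 5.0 m
d3 = disparity_to_depth(np.full((2, 2), 1.0), focal=100.0, baseline=0.1)  # 10.0 m
fused = consolidate_depths([d1, d2, d3])
```

The median makes the fused map insensitive to a single inconsistent pairwise estimate, which is one simple way to obtain cross-view agreement.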
Object tracking is a core technique in many computer vision applications. The problem becomes especially challenging when the target object is partially or even fully occluded. A recent work has shown the feasibility of using plenoptic imaging techniques to resolve such occlusion problems. Specifically, it constructs focal stacks from plenoptic image sequences and selects an optimal image sequence from the stacks that maximizes the tracking accuracy. Although this technique has proven the merit of using plenoptic images for object tracking, there is still room for improvement. In this paper, we propose two simple but effective algorithms that improve both the accuracy and the robustness of object tracking based on plenoptic images. We first propose to use an image sharpening technique to reduce the blur that refocused images inherently have. The sharpening makes object shapes more distinct, and thus higher tracking accuracy can be achieved. We also propose an adaptive bounding box proposal algorithm to handle difficult cases where the size of the target object in the image space changes drastically. This improves the robustness of the tracking compared to prior techniques that assume fixed-size objects. We validate the proposed algorithms on two different scenarios, and the experimental results confirm the benefit of our method.
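As a concrete illustration of the sharpening idea, below is a minimal unsharp-masking sketch: the high-frequency residual (image minus a blurred copy) is added back to emphasize edges. This is a generic sharpening filter written with NumPy only; the abstract does not specify which sharpening technique the paper uses, so the box blur, the `amount` parameter, and the function names are assumptions for illustration.

```python
import numpy as np

def box_blur(img, k=3):
    """k x k box blur with edge padding (same-size output)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0, k=3):
    """Sharpen by adding back the residual: out = img + amount * (img - blur)."""
    img = img.astype(float)
    blurred = box_blur(img, k)
    return np.clip(img + amount * (img - blurred), 0, 255)

# A step edge: sharpening overshoots on the bright side of the edge,
# making the boundary more distinct for the tracker.
step = np.zeros((8, 8))
step[:, 4:] = 100.0
sharpened = unsharp_mask(step)
```

On a refocused (hence slightly blurred) frame, this kind of filter increases local edge contrast, which is the property the tracking accuracy improvement relies on.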