KEYWORDS: Deep learning, Video, RGB color model, Visualization, Systems modeling, Cameras, Video processing, Statistical modeling, Sensors, Pose estimation
First-time spectators of fencing competitions often cannot follow the complicated rules, which makes it difficult for them to enjoy a match. In this paper, we therefore propose a system that detects the situation of a fencing match using skeleton points extracted from videos. Because equipping players with sensors or other devices would interfere with the match, the system relies solely on skeleton-point information extracted from videos to detect "phrases" and display the match situation. We evaluate the system on videos of actual fencing matches to confirm its performance.
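The abstract does not publish the detection algorithm, but one simple cue a skeleton-based system could use for who holds the offensive in a phrase is which fencer is advancing, estimated from the horizontal motion of ankle keypoints. The sketch below is purely illustrative; the function name, window size, and threshold are all assumptions, not the paper's method.

```python
# Hypothetical sketch: infer the advancing fencer from the horizontal
# velocity of front-ankle keypoints over a short window of frames.
# All names and thresholds are illustrative assumptions.
import numpy as np

def advancing_fencer(left_ankles, right_ankles, fps=30.0, thresh=50.0):
    """Return 'left', 'right', or 'none' from per-frame ankle x-positions (px).

    left_ankles / right_ankles: 1-D arrays of each fencer's front-ankle
    x-coordinate (in pixels) over a short window of video frames.
    """
    v_left = np.gradient(np.asarray(left_ankles)) * fps    # px/s, + is rightward
    v_right = np.gradient(np.asarray(right_ankles)) * fps
    if np.median(v_left) > thresh:        # left fencer moving toward opponent
        return "left"
    if np.median(v_right) < -thresh:      # right fencer moving toward opponent
        return "right"
    return "none"

# Example: the left fencer lunges rightward across a 10-frame window
# (about 200 px/s at 30 fps) while the right fencer stands still.
window = np.linspace(200, 260, 10)
print(advancing_fencer(window, np.full(10, 800)))  # → "left"
```

A real system would combine several such cues (blade extension, lunge depth, retreat) and feed them to a learned classifier rather than a fixed threshold.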
This paper proposes a polynomial-fitting-based calibration method for active 3D sensing using the dynamic light section method. In the dynamic light section method, the relative position of the line laser is changed dynamically at high speed to extend the measurement area at a low computational cost. To conduct 3D sensing, the equation of the laser plane must be known. In the proposed calibration method, part of the line laser is projected onto a reference plane fixed in the 3D sensing system, and correspondences between the normal vectors of the laser plane and the image coordinates of the bright point on the reference plane are collected. These correspondences are then regressed to a polynomial function. As a result, the plane equation of the line laser can be obtained at any given moment without modeling the complicated system. Through a measurement-accuracy evaluation of the dynamic light section method calibrated by polynomial fitting, we show that a target at a distance of 800 mm can be measured with a mean error of -5.94 mm and a standard deviation of 13.19 mm while rotating the line laser at 210 rpm.
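The core regression step can be sketched as follows. This is a minimal illustration, assuming the laser-plane normal is regressed against the image x-coordinate of the bright point on the reference plane; the synthetic data, polynomial degree, and function names are assumptions, not the paper's implementation.

```python
# Sketch of the polynomial-fitting calibration: regress each component of
# the laser-plane normal onto a polynomial in the bright point's image
# coordinate. Synthetic data and degree are illustrative assumptions.
import numpy as np

# Synthetic calibration data: image x-coordinates of the bright point and
# the corresponding (unit) laser-plane normal vectors.
u = np.linspace(100, 500, 50)                       # image coordinates (px)
theta = np.deg2rad(0.05 * (u - 300))                # underlying laser rotation
normals = np.stack([np.cos(theta), np.sin(theta),
                    np.zeros_like(theta)], axis=1)  # normals (nx, ny, nz)

# Fit one cubic per normal component as a function of u.
coeffs = [np.polyfit(u, normals[:, k], deg=3) for k in range(3)]

def plane_normal(u_query):
    """Predict the laser-plane normal for a new bright-point coordinate."""
    n = np.array([np.polyval(c, u_query) for c in coeffs])
    return n / np.linalg.norm(n)                    # renormalize the fit

print(plane_normal(250.0))
```

Once the normal is known, the full plane equation follows from any fixed point the plane passes through (e.g., the laser's rotation axis), which is the quantity needed for triangulating the light section at each instant.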
Thanks to their large field of view, fisheye cameras are widely used in many applications. To generate a precise view, calibration of fisheye cameras is very important. In this paper, we propose a method for calibrating the extrinsic parameters of multiple fisheye cameras operating in man-made structures. A Manhattan-world assumption is used, which describes man-made structures as sets of planes that are either orthogonal or parallel to each other. The orientation of the cameras is obtained by extracting vanishing points that denote the orthogonal principal directions in images captured by the different cameras at the same time. With the proposed method, extrinsic calibration is very convenient and the system can be recalibrated remotely.
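The orientation step rests on a standard observation: under a Manhattan-world assumption, the unit directions toward the three orthogonal vanishing points, expressed in camera coordinates, are the columns of the rotation between the camera and the world axes. A minimal sketch of that recovery (not the paper's full pipeline, which must also detect the vanishing points and resolve axis ambiguities):

```python
# Recover camera-to-Manhattan-world orientation from three (noisy)
# vanishing-point directions by projecting onto SO(3) via SVD.
import numpy as np

def orientation_from_vps(d_x, d_y, d_z):
    """Rotation whose columns best align with three orthogonal VP directions."""
    M = np.stack([d_x, d_y, d_z], axis=1)      # approximate rotation matrix
    U, _, Vt = np.linalg.svd(M)                # nearest orthogonal matrix
    R = U @ Vt
    if np.linalg.det(R) < 0:                   # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Usage example: perturb the identity's axes with small noise.
rng = np.random.default_rng(0)
dirs = [np.eye(3)[:, k] + 0.01 * rng.standard_normal(3) for k in range(3)]
dirs = [d / np.linalg.norm(d) for d in dirs]
R_est = orientation_from_vps(*dirs)
```

The SVD projection is what makes the estimate robust to the individual vanishing-point directions not being exactly orthogonal, which is the typical situation with detections from real fisheye images.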
In this research, we propose a novel distortion-resistant visual odometry technique using a spherical camera to provide localization for a UAV-based bridge-inspection support system. We account for pixel distortion when computing the two-frame essential matrix from feature-point correspondences. We then triangulate 3D points and use them for 3D registration of subsequent frames in the sequence via a modified spherical error function. In experiments conducted on a real bridge pillar, we demonstrate that the proposed approach greatly increases localization accuracy, yielding an 8.6 times lower localization error.
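The two-frame step can be sketched with the standard spherical formulation: pixels are first mapped to unit bearing vectors, so distortion is absorbed by the projection model, and the essential matrix is then estimated linearly from the epipolar constraint b2ᵀ E b1 = 0 on bearings. The equirectangular mapping and function names below are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch of spherical two-frame epipolar geometry: map pixels to
# unit bearings, then solve b2^T E b1 = 0 linearly (8-point style).
import numpy as np

def pixel_to_bearing(u, v, width, height):
    """Map an equirectangular pixel to a unit ray on the sphere (assumed model)."""
    lon = (u / width) * 2 * np.pi - np.pi
    lat = np.pi / 2 - (v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def essential_eight_point(b1, b2):
    """Linear essential-matrix estimate from >= 8 bearing-vector pairs."""
    # Each pair contributes one row: kron(b2_i, b1_i) . vec(E) = 0.
    A = np.stack([np.kron(q, p) for p, q in zip(b1, b2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)                   # null-space solution
    # Enforce the essential-matrix singular-value structure (s, s, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

Because the constraint is written on unit rays rather than image-plane points, the same code applies regardless of how strong the lens distortion is, provided the pixel-to-bearing model is accurate; this is the sense in which working on the sphere makes the odometry distortion-resistant.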