Volume reconstruction and pose retrieval of an arbitrary rigid object
from monocular video sequences are addressed. Initially, the object
pose is estimated in each image by locating similar textures, assuming
a flat depth map. Then shape-from-silhouette is used to construct a volume
(3-D model). This volume is used in a new round of pose estimations,
this time by a model-based method that gives better estimates. Before
repeating this process by building a new volume, pose estimates are
adjusted to reduce error by maximizing a novel quality measure for
shape-from-silhouette volume reconstruction. The feedback loop is
terminated when pose estimates change little compared with those
produced by the previous iteration. Based on a theoretical study of the
proposed system, a test of convergence to a given set of
poses is devised. Reliable performance of the system is also demonstrated by
several experiments. No model is assumed for the object. Feature
points are neither detected nor tracked, so there is no problematic
feature matching or correspondence. Our method can also be applied to
3-D object tracking in video.
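
The abstract describes an iterative feedback loop: estimate per-frame poses, carve a volume by shape-from-silhouette, refine the poses against that volume, and stop when the estimates settle. The sketch below, in Python with NumPy, illustrates only two generic building blocks of such a loop: silhouette-based voxel carving and a pose-change test that could serve as a termination criterion. All function names, the intrinsic matrix K, and the (R, t) pose representation are assumptions for illustration; the authors' texture-based initialization, model-based pose estimator, and quality measure are not reproduced here.

```python
import numpy as np


def carve_volume(silhouettes, poses, K, voxels):
    """Shape-from-silhouette on a voxel grid: keep a voxel only if it projects
    inside the silhouette of every frame, given that frame's pose (R, t).
    Names and interfaces are illustrative, not taken from the paper."""
    occupied = np.ones(len(voxels), dtype=bool)
    for sil, (R, t) in zip(silhouettes, poses):
        cam = voxels @ R.T + t                    # object coordinates -> camera coordinates
        img = cam @ K.T                           # pinhole projection (homogeneous pixels)
        uv = np.round(img[:, :2] / img[:, 2:3]).astype(int)
        h, w = sil.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[ok] = sil[uv[ok, 1], uv[ok, 0]] > 0   # inside this view's silhouette?
        occupied &= hit                           # intersection over all views
    return occupied


def pose_change(pose_a, pose_b):
    """Scalar change between two (R, t) estimates: rotation angle in radians
    plus translation distance. An iterative loop can terminate when the
    maximum change over all frames falls below a small threshold."""
    (Ra, ta), (Rb, tb) = pose_a, pose_b
    cos_angle = np.clip((np.trace(Ra.T @ Rb) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.arccos(cos_angle) + np.linalg.norm(np.asarray(ta) - np.asarray(tb)))


if __name__ == "__main__":
    # Toy run: two views with all-foreground silhouettes keep every voxel.
    K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
    xs = np.linspace(-0.2, 0.2, 9)
    voxels = np.stack(np.meshgrid(xs, xs, xs), axis=-1).reshape(-1, 3)
    front = (np.eye(3), np.array([0.0, 0.0, 1.0]))                  # camera 1 unit in front
    side = (np.array([[0.0, 0.0, -1.0],
                      [0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0]]), np.array([0.0, 0.0, 1.0])) # 90-degree side view
    sil = np.ones((64, 64), dtype=np.uint8)
    kept = carve_volume([sil, sil], [front, side], K, voxels)
    print(kept.sum(), "of", len(voxels), "voxels kept")
    print("pose change, front vs. side:", round(pose_change(front, side), 3))
```

In the paper's loop, the adjustment of pose estimates by maximizing the proposed quality measure would sit between the model-based refinement step and a convergence check of this kind; that measure is specific to the paper and is not modeled above.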