The integration of gaze gesture sensors into next-generation smart glasses will improve usability and enable new interaction concepts. However, consumer smart glasses place additional requirements on gaze gesture sensors, such as low power consumption, high integration capability, and robustness to ambient illumination. We propose a novel gaze gesture sensor based on laser feedback interferometry (LFI) that is capable of measuring both the rotational velocity of the eye and the sensor's distance to the eye. This sensor delivers a unique and novel set of features at an outstanding sample rate, allowing a gaze gesture not only to be predicted but also anticipated. To take full advantage of the unique sensor features and the high sampling rate, we additionally propose a novel gaze gesture classification algorithm that operates on single samples. Achieving a mean F1-score of 93.44, the algorithm performs at a negative latency between gaze gesture input and command execution.
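The single-sample classification with early commitment described above could be sketched as follows. All thresholds, labels, and function names here are illustrative assumptions, not values or interfaces from the paper: each sensor sample (rotational velocity, distance) is classified on its own, and a gesture is emitted as soon as a few consecutive per-sample votes agree, i.e. before the gesture completes, which is what yields the negative latency.

```python
def classify_sample(omega_deg_s, distance_mm):
    """Hypothetical single-sample rule: classify one (rotational velocity,
    distance) measurement. Thresholds are illustrative placeholders."""
    if not 10.0 <= distance_mm <= 40.0:   # sensor not aimed at the eye
        return "invalid"
    if omega_deg_s > 100.0:               # fast positive rotation
        return "right"
    if omega_deg_s < -100.0:              # fast negative rotation
        return "left"
    return "fixation"

def anticipate(stream, votes_needed=3):
    """Emit a gesture as soon as enough consecutive per-sample votes agree,
    i.e. before the gesture completes. Returns (sample_index, label) on a
    detection, or None if no gesture is recognized in the stream."""
    streak, last = 0, None
    for i, (omega, dist) in enumerate(stream):
        label = classify_sample(omega, dist)
        if label in ("left", "right"):
            streak = streak + 1 if label == last else 1
            last = label
            if streak >= votes_needed:
                return i, label
        else:
            streak, last = 0, None
    return None
```

With a sample rate in the kilohertz range, three agreeing samples correspond to a decision delay of a few milliseconds, well inside a typical saccade duration.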
We present a new loss function for validating image landmarks detected via Convolutional Neural Networks (CNNs). With it, the network learns to estimate how accurate its own landmark predictions are. This loss function is applicable to all regression-based location estimates and allows unreliable landmarks to be excluded from further processing. In addition, we formulate a novel batch-balancing approach that weights the importance of samples based on the loss they produce. This is done by mapping the per-sample losses to a probability distribution over an interval, from which samples are then drawn by uniform random selection. We conducted experiments on the 300W, AFLW, and WFLW facial landmark datasets. First, the influence of our batch-balancing approach is evaluated by comparing it against uniform sampling. We also evaluated the impact of the validation loss on landmark accuracy under uniform sampling. Finally, we evaluate the correlation of the validation signal with the landmark accuracy. All experiments were performed on all three datasets.
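The batch-balancing idea above (mapping per-sample losses to a distribution over an interval, then drawing uniformly) can be sketched as inverse-CDF sampling. This is a minimal sketch under that reading of the abstract; the function name and interface are assumptions, not the paper's implementation:

```python
import bisect
import random

def loss_weighted_indices(losses, batch_size):
    """Draw batch indices with probability proportional to each sample's
    last recorded loss: build a running (cumulative) sum of the losses,
    draw uniform points on [0, total), and locate each point in the sum."""
    cdf, total = [], 0.0
    for loss in losses:
        total += loss
        cdf.append(total)
    picks = []
    for _ in range(batch_size):
        u = random.random() * total                       # uniform on [0, total)
        picks.append(min(bisect.bisect_right(cdf, u),     # clamp guards against
                         len(losses) - 1))                # float rounding at the edge
    return picks
```

Samples with larger losses occupy larger sub-intervals of the cumulative sum, so a uniform draw over the whole interval selects them more often, which concentrates training on hard examples.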