Accuracy Assessment of ARKit 2 Based Gaze Estimation
Published in International Conference on Human-Computer Interaction, 2020
Greinacher, R. & Voigt-Antons, J.-N.
With the growing use of mobile applications, ensuring a high quality of experience has become increasingly important. Besides traditional subjective methods to test and prototype new developments, eye tracking is a prominent tool for assessing the quality and UX of a software product. Although portable eye trackers exist, the technology is still mostly associated with expensive laboratory equipment. To change that, and to enable quick and inexpensive eye-tracking studies in the field, attempts have been made to turn everyday hardware such as smartphone cameras and webcams into eye trackers. This study explores the possibility of using a standard iOS library to avoid the considerable technical complexity that usually comes with such approaches. The accuracy of an eye-tracking system based purely on the ARKit APIs of iOS is evaluated in two user studies (N = 9 and N = 8). The results indicate that an ARKit-based gaze tracker provides comparable accuracy (1.44 cm on screen) while using far fewer hardware resources and providing a higher sample rate than any other smartphone eye tracker. The easy-to-use API in particular is the main advantage over technically complex systems that rely on their own image analysis for gaze estimation. Privacy implications are discussed.
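The tracker described here builds on ARKit's built-in face tracking rather than custom image analysis. The Swift snippet below is a minimal sketch of that idea: it reads ARKit's gaze estimate (the `lookAtPoint` property of `ARFaceAnchor`) on each face-tracking frame. The class name and the omitted on-screen projection step are illustrative assumptions, not the authors' implementation.

```swift
import ARKit

// Minimal sketch of ARKit-based gaze estimation; class and variable
// names are illustrative, not taken from the paper's implementation.
final class GazeTracker: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        // Face tracking requires a TrueDepth camera (iPhone X and later).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Delivered on every face-tracking frame (up to 60 Hz on supported devices).
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        // lookAtPoint is ARKit's gaze estimate in face-anchor coordinates;
        // multiplying by the anchor transform yields a world-space point.
        let gazeInWorld = face.transform * simd_float4(face.lookAtPoint, 1)
        // Projecting this point onto the screen plane to obtain on-screen
        // coordinates is device-specific and omitted in this sketch.
        print("Gaze (world): \(gazeInWorld.x), \(gazeInWorld.y), \(gazeInWorld.z)")
    }
}
```

Because gaze estimation happens entirely inside ARKit, no camera frames need to be processed (or stored) by the application itself, which is also the basis for the privacy discussion in the paper.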
Recommended citation: Greinacher, R., & Voigt-Antons, J.-N. (2020, July). Accuracy Assessment of ARKit 2 Based Gaze Estimation. Paper presented at the International Conference on Human-Computer Interaction (pp. 1–10). Springer, Cham. https://doi.org/10.1007/978-3-030-49059-1_32