Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/13717
Title: Eye-gaze tracking by video-based joint head and eye pose estimation
Authors: Cristina, Stefania
Keywords: Eye tracking
Tracking (Engineering)
Algorithms
Issue Date: 2016
Abstract: Human eye-gaze tracking has been receiving increasing attention over the years. Recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated the tracking of eye movements in unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This has led to a demand for eye-gaze tracking methods that, among other requirements, handle extensive head rotations, reduce the user calibration effort to suit situations that do not permit prolonged user co-operation, and estimate gaze from low-quality images captured by generic imaging hardware. In light of these demands, we have aligned our research objectives and contributions with several of the main challenges associated with this area of interest. We first propose a method for the estimation of on-screen point-of-regard allowing slight head movement which, under a small-angle assumption, linearly projects the image displacements of specific eye and head features onto the screen. A brief calibration procedure that suffices to obtain this image-to-screen mapping is also designed. Next, using a model-based head pose estimation technique, a Spherical Eye-in-head Rotation (SphERo) model is proposed which allows the non-linearities of the eye and head movements to be modelled more accurately, hence permitting gaze estimation under larger head movement. The model parameter values are estimated from image information and anthropometric measurements of the human eye, further reducing the calibration effort. Next, we introduce a model-free head pose estimation method based on the trajectories of salient feature points spread randomly over the face region, and further extend this to handle non-rigid face deformations.
Our methods do not necessitate prior training or shape information as typically required by the state-of-the-art, and address several of the prevailing issues associated with model-fitting techniques for head pose estimation. Finally, the model-free head pose estimation method is combined with the SphERo model to provide combined gaze estimation by joint head and eye pose estimation allowing for non-rigid face deformations. Quantitative evaluation of the methods has been carried out at every stage, where in summary the on-screen method yields a mean point-of-regard error of (22.64, 17.81) pixels, within the footprint of a standard icon, and the model-free SphERo method gives a mean gaze error of (6.01°, 5.14°) in yaw and pitch under considerable head rotations.
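As a rough illustration of the first contribution: under a small-angle assumption, the on-screen point-of-regard can be treated as an affine function of the image displacements of tracked eye and head features, so a brief calibration reduces to a least-squares fit. The sketch below is illustrative only; the data, gains, and function names are invented and do not reproduce the thesis's actual calibration procedure.

```python
import numpy as np

def fit_linear_mapping(displacements, screen_points):
    """Fit an affine image-to-screen map  screen ~ d @ A + b  by least squares.

    displacements : (N, k) image displacements of tracked features (pixels)
    screen_points : (N, 2) on-screen calibration targets (pixels)
    """
    n = displacements.shape[0]
    # Augment with a column of ones so the offset b is absorbed into W.
    X = np.hstack([displacements, np.ones((n, 1))])
    W, *_ = np.linalg.lstsq(X, screen_points, rcond=None)
    return W  # shape (k + 1, 2)

def predict(W, displacement):
    """Map a single feature-displacement vector to a screen point."""
    return np.append(displacement, 1.0) @ W

# Synthetic calibration data: screen points generated by a known affine map.
rng = np.random.default_rng(0)
d = rng.uniform(-10, 10, size=(9, 2))          # hypothetical eye-feature shifts
true_A = np.array([[40.0, 0.0], [0.0, 35.0]])  # hypothetical screen gains
true_b = np.array([960.0, 540.0])              # hypothetical screen-centre offset
s = d @ true_A + true_b

W = fit_linear_mapping(d, s)
print(predict(W, np.array([1.0, -2.0])))       # close to [1000., 470.]
```

With noise-free synthetic data the fit recovers the generating map exactly; in practice the residual of such a fit reflects how far the small-angle assumption holds as head movement grows, which is what motivates the SphERo model in the later contributions.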
Description: PH.D.ENGINEERING
URI: https://www.um.edu.mt/library/oar/handle/123456789/13717
Appears in Collections:Dissertations - FacEng - 2016

Files in This Item:
File: 16PHDENGR005.pdf (Restricted Access)
Size: 72.49 MB
Format: Adobe PDF

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.