Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/24015
Title: Model and dictionary guided face inpainting in the wild
Authors: Farrugia, Reuben A.
Guillemot, Christine
Keywords: Human face recognition (Computer science)
Image reconstruction
Video recording
Video surveillance
Issue Date: 2016
Publisher: Springer
Citation: Farrugia, R. A., & Guillemot, C. (2016). Model and dictionary guided face inpainting in the wild. Asian Conference on Computer Vision (ACCV), Taipei.
Abstract: This work presents a method that can be used to inpaint occluded facial regions with unconstrained pose and orientation. The approach first warps the facial region onto a reference model to synthesize a frontal view. A modified Robust Principal Component Analysis (RPCA) approach is then used to suppress warping errors. It then uses a novel local patch-based face inpainting algorithm that hallucinates missing pixels using a dictionary of face images that are pre-aligned to the same reference model. The hallucinated region is then warped back onto the original image to restore the missing pixels. Experimental results on synthetic occlusions demonstrate that the proposed face inpainting method achieves the best performance, with PSNR gains of up to 0.74 dB over the second-best method. Moreover, experiments on the COFW dataset and a number of real-world images show that the proposed method successfully restores occluded facial regions in the wild, even for CCTV-quality images.
URI: https://www.um.edu.mt/library/oar/handle/123456789/24015
Appears in Collections: Scholarly Works - FacICTCCE
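
The abstract above describes a four-stage pipeline: warping the face onto a frontal reference model, suppressing warping errors, dictionary-guided patch inpainting, and warping the hallucinated region back. The Python sketch below is only a minimal illustration of that flow under simplifying assumptions: a landmark-based homography stands in for the dense warp to the reference model, a plain low-rank truncation stands in for the modified RPCA step, and a nearest-patch fill stands in for the local patch-based inpainting algorithm. All function and parameter names (warp_to_reference, suppress_warping_errors, inpaint_with_dictionary, warp_back, rank, patch) are hypothetical and do not reflect the authors' implementation.

import numpy as np
import cv2


def warp_to_reference(face, src_landmarks, ref_landmarks, ref_size):
    """Warp an arbitrarily posed face onto the frontal reference model.

    A homography estimated from landmark correspondences is used here as a
    simple stand-in for the model-based warping described in the abstract.
    ref_size is (width, height) of the reference model image.
    """
    H, _ = cv2.findHomography(src_landmarks, ref_landmarks, cv2.RANSAC)
    frontal = cv2.warpPerspective(face, H, ref_size)
    return frontal, H


def suppress_warping_errors(frontal_stack, rank=10):
    """Crude stand-in for the modified RPCA step: keep a low-rank
    approximation of a stack of vectorised frontal faces (rows) so that
    sparse warping artefacts are attenuated."""
    U, s, Vt = np.linalg.svd(frontal_stack, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt


def inpaint_with_dictionary(frontal, mask, dictionary, patch=16):
    """Fill occluded patches by copying the best-matching patch (measured on
    the visible pixels) from a dictionary of face images pre-aligned to the
    same reference model."""
    out = frontal.copy()
    h, w = mask.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            m = mask[y:y + patch, x:x + patch]
            if not m.any():
                continue  # nothing occluded in this patch
            target = out[y:y + patch, x:x + patch]
            visible = ~m
            best, best_err = None, np.inf
            for atom in dictionary:  # each atom: a face aligned to the reference
                cand = atom[y:y + patch, x:x + patch]
                err = np.sum((cand[visible].astype(float)
                              - target[visible].astype(float)) ** 2)
                if err < best_err:
                    best, best_err = cand, err
            target[m] = best[m]  # hallucinate only the occluded pixels
    return out


def warp_back(hallucinated, H, original, mask_original):
    """Warp the hallucinated frontal view back onto the original image and
    restore only the pixels that were missing there."""
    h, w = original.shape[:2]
    back = cv2.warpPerspective(hallucinated, np.linalg.inv(H), (w, h))
    restored = original.copy()
    restored[mask_original] = back[mask_original]
    return restored
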

Files in This Item:
File: RAW10-01.pdf (Restricted Access)
Size: 7.04 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.