Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/94655
Full metadata record
DC Field | Value | Language
dc.date.accessioned | 2022-04-29T07:20:27Z | -
dc.date.available | 2022-04-29T07:20:27Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Seychell, D. (2021). An efficient saliency driven approach for image manipulation (Doctoral dissertation). | en_GB
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/94655 | -
dc.description | Ph.D.(Melit.) | en_GB
dc.description.abstract | With the increasing availability of low-cost, high-quality cameras, embedded vision systems, advanced computer vision algorithms, and proliferating solutions based on image and video data, the volume of visual content being captured, stored, and transmitted is on the rise. Moreover, the image-capturing hardware on mobile devices is also improving, with a wide range of devices housing multiview camera setups. Combined with today's user-experience expectations, this poses a challenge to the editing process, from which users expect efficient results in the most automated way possible. Image editing is a multistage process that spans from the choice of the object or target region in the image to the actual manipulation. We introduce a novel saliency-driven image content ranking approach that allows objects to be selected automatically without the need to train a model. Regions in an image can be selected according to the desired rank. This approach was compared with human behaviour when choosing the most salient object in an image, in experiments involving 2254 participants; the results obtained by the algorithm matched the behaviour of 91% of the human participants. The technique also scored an Fβ measure of 0.84 on the MSRA10k dataset and compares favourably with conventional saliency detection models that, unlike this technique, do not rank saliency. We also demonstrate how our saliency ranking model can be combined with segmentation techniques: the combined result of our saliency-driven ranking of segmentation masks compared well with current state-of-the-art deep learning methods that rank segmented objects. Once an object is selected for editing, users expect an efficient way to manipulate images accurately. This fundamental stage is explored in our work, where we demonstrate the importance of object inpainting.
The main challenge of image inpainting is its objective evaluation, and this work presents a new structured approach to evaluating inpainting algorithms objectively. User studies with 2254 participants demonstrated that, on average, users take 3.67 s to choose an object for editing on a screen. The combined saliency-driven image manipulation framework takes advantage of this physical limitation and efficiently pipelines processes to deliver accurate and efficient results in image manipulation tasks such as attention retargeting. A multi-purpose dataset was designed and built to serve all these functions and is also presented in this work. | en_GB
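The abstract reports an Fβ measure of 0.84 on MSRA10k. As a point of reference, the sketch below shows the standard Fβ formula from the salient-object-detection literature, where β² = 0.3 is the conventional setting that weights precision over recall; the exact β value and the numeric inputs here are illustrative assumptions, not figures taken from the dissertation.

```python
def f_beta(precision: float, recall: float, beta_sq: float = 0.3) -> float:
    """F-beta measure: (1 + b^2) * P * R / (b^2 * P + R).

    beta_sq = 0.3 is the value conventionally used when benchmarking
    saliency detection (an assumption here, not confirmed by the record).
    """
    if precision + recall == 0:
        return 0.0
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)

# Illustrative precision/recall values only (not from the dissertation):
print(round(f_beta(0.86, 0.80), 3))  # → 0.845
```

With β² < 1 the measure rewards precise masks more than exhaustive ones, which matches how saliency benchmarks are typically scored.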
dc.language.iso | en | en_GB
dc.rights | info:eu-repo/semantics/openAccess | en_GB
dc.subject | Image segmentation | en_GB
dc.subject | Neural networks (Computer science) | en_GB
dc.subject | Data sets | en_GB
dc.subject | Algorithms | en_GB
dc.subject | Computer vision | en_GB
dc.subject | Image processing -- Digital techniques | en_GB
dc.title | An efficient saliency driven approach for image manipulation | en_GB
dc.type | doctoralThesis | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.publisher.institution | University of Malta | en_GB
dc.publisher.department | Faculty of ICT. Department of Communications and Computer Engineering | en_GB
dc.description.reviewed | N/A | en_GB
dc.contributor.creator | Seychell, Dylan (2021) | -
Appears in Collections: Dissertations - FacICT - 2021
Dissertations - FacICTCCE - 2021

Files in This Item:
File | Description | Size | Format
21PHDIT002.pdf | | 26.42 MB | Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.