Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/11408
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.date.accessioned | 2016-07-12T10:30:55Z | - |
dc.date.available | 2016-07-12T10:30:55Z | - |
dc.date.issued | 2015 | - |
dc.identifier.uri | https://www.um.edu.mt/library/oar//handle/123456789/11408 | - |
dc.description | B.SC.IT(HONS) | en_GB |
dc.description.abstract | A face image can reveal much about a person, such as gender, mood, ethnicity, age and identity. The aim of this dissertation is to produce a face recognizer that outputs the identity of a person and an age estimator that predicts an accurate age from a facial image. Age estimation and facial recognition can be applied to various real-life situations, such as authorization systems or age verification on cigarette vending machines. The age estimator is based on Extended Bio-Inspired Features, which build on the HMAX model (a feedforward model of the visual object recognition pathway consisting of alternating Simple Cell and Complex Cell layers). The age estimator takes a single face image as input and outputs a predicted age. The algorithm is split into five main stages: (1) detecting the face in the input image; (2) cropping the background from the face image; (3) extracting features from the cropped face; (4) reducing the feature vectors to a lower dimensionality; and (5) training the learning machines with the reduced feature vectors and corresponding labels. After the machines have been trained on a dataset containing different people of varying ages under varying lighting conditions and poses, the age estimator can predict the ages of previously unseen subjects in real-life situations. The face is detected using Haar-like features, and the face image is cropped by applying an Active Shape Model (ASM) to the detected face. Features are extracted from the cropped face using a family of Gabor filters at different scales and orientations. These features are then reduced by applying the standard deviation (STD) operation, which captures the main variation in the data and shrinks the feature vectors from over 300,000 dimensions to approximately 6,000. Using these feature vectors and corresponding labels, Support Vector Machines (SVMs) and Support Vector Regression (SVR) models are trained. After tuning the SVM and SVR parameters, the algorithm achieved a Mean Absolute Error (MAE) of seven years for ages ranging from 0 to 69 years. | en_GB |
dc.language.iso | en | en_GB |
dc.rights | info:eu-repo/semantics/restrictedAccess | en_GB |
dc.subject | Human face recognition (Computer science) | en_GB |
dc.subject | Computer vision | en_GB |
dc.subject | Image processing | en_GB |
dc.title | Face recognition and age estimation application in order to prevent fake identities and crimes | en_GB |
dc.type | bachelorThesis | en_GB |
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB |
dc.publisher.institution | University of Malta | en_GB |
dc.publisher.department | Faculty of Information and Communication Technology | en_GB |
dc.description.reviewed | N/A | en_GB |
dc.contributor.creator | Cachia, Gabriel | - |
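The abstract's feature-extraction and reduction stages (Gabor filtering followed by an STD operation) can be sketched as follows. This is a minimal numpy-only illustration, not the dissertation's implementation: the kernel parameters, the two scales, and the choice of reducing each filter-response map to a single standard deviation are all assumptions made for the sketch (the actual work reduces ~300,000 features to ~6,000, so its reduction is necessarily finer-grained than one scalar per filter).

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def extract_features(face, scales=(7, 11), orientations=4):
    """Filter the face with a Gabor bank at several scales/orientations,
    then reduce each response map to its standard deviation (one possible
    reading of the STD reduction step described in the abstract)."""
    features = []
    for size in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = gabor_kernel(size, wavelength=size / 2,
                                theta=theta, sigma=size / 4)
            h, w = face.shape
            s = kern.shape[0]
            # 'valid' cross-correlation via explicit sliding windows
            resp = np.array([[np.sum(face[i:i + s, j:j + s] * kern)
                              for j in range(w - s + 1)]
                             for i in range(h - s + 1)])
            features.append(resp.std())   # one scalar per scale/orientation
    return np.array(features)

face = np.random.default_rng(0).random((32, 32))  # stand-in for a cropped face
print(extract_features(face).shape)  # 2 scales x 4 orientations -> (8,)
```

Each (scale, orientation) pair contributes one value here; scaling the bank up and reducing per image region rather than per whole map would bring the output closer to the ~6,000-dimensional vectors the abstract describes.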
Appears in Collections: | Dissertations - FacICT - 2015 |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
15BSCIT009.pdf | Restricted Access | 2.43 MB | Adobe PDF |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.
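The final stage of the abstract's pipeline trains learning machines on reduced feature vectors and reports a Mean Absolute Error. The dissertation uses SVMs and SVRs; as a dependency-free stand-in, the sketch below uses kernel ridge regression with an RBF kernel on synthetic data to illustrate the same train-on-features, report-MAE workflow. The data, dimensions, and regularization constant are all invented for the example and say nothing about the dissertation's reported seven-year MAE.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: reduced feature vectors with age labels.
n, d = 80, 50   # the real pipeline uses ~6,000 features; 50 keeps this fast
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
ages = np.clip(35 + 10 * X @ w_true / np.sqrt(d), 0, 69)  # ages in [0, 69]

X_train, X_test = X[:60], X[60:]
y_train, y_test = ages[:60], ages[60:]

def rbf(A, B, gamma=0.02):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: solve (K + lam*I) alpha = y on the training set.
K = rbf(X_train, X_train)
alpha = np.linalg.solve(K + 1.0 * np.eye(len(K)), y_train)
pred = rbf(X_test, X_train) @ alpha

mae = np.mean(np.abs(pred - y_test))
print(f"MAE: {mae:.1f} years")
```

An SVR (e.g. with an RBF kernel) would replace the closed-form solve with an epsilon-insensitive loss and support-vector sparsity, but the evaluation step, mean absolute difference between predicted and true ages, is the same metric the abstract reports.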