Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/11365
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.date.accessioned | 2016-07-11T10:40:41Z | - |
dc.date.available | 2016-07-11T10:40:41Z | - |
dc.date.issued | 2015 | - |
dc.identifier.uri | https://www.um.edu.mt/library/oar//handle/123456789/11365 | - |
dc.description | B.SC.IT(HONS) | en_GB |
dc.description.abstract | Emotion is a significant part of every human being. The field of affective computing integrates emotion with machines so as to provide assistance and improve the user experience. Through emotion recognition, which analyses facial features to discern a particular emotional state, new ways of performing human-computer interaction are being conceived. Emotion also comes into play in the field of Music Information Retrieval, in which music is categorised by the emotion it conveys. With the spread of digital music, new, usable ways of presenting music to users are required to enable a better and more fulfilling music-listening experience. Furthermore, deeper analysis of the metadata improves recognition and yields more relevant recommendations. This report provides a literature survey on emotion recognition and music classification, showing the different techniques used in these two fields of study. The aim of this project is to supply a service that combines these fields, with the intent of letting users listen to music based on their current mood. Facial traits are extracted via a webcam and fed into a learning algorithm, which infers the most probable emotion according to data models created beforehand. This emotion in turn determines what type of music is played, drawing on a database of labelled music processed by a supervised learning algorithm. Moreover, a feedback system allows users to rate songs, improving the system and in turn providing better recommendations. In this study, the emotion recognition mechanism performed well, yielding a high proportion of correct classifications. Given the use of multiple processes for the final classification, the computation time was suitable, especially in an online setting. Music classification using different learning algorithms gave satisfactory outcomes, demonstrating an effective approach to categorising music into multiple emotion categories. These results, coupled with further research, form a promising basis for future work. | en_GB |
dc.language.iso | en | en_GB |
dc.rights | info:eu-repo/semantics/restrictedAccess | en_GB |
dc.subject | Human-computer interaction | en_GB |
dc.subject | Information storage and retrieval systems -- Music | en_GB |
dc.subject | Music -- Data processing | en_GB |
dc.subject | Emotions -- Computer simulation | en_GB |
dc.title | Emotion-based music player | en_GB |
dc.type | bachelorThesis | en_GB |
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation, provided that the author is properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB |
dc.publisher.institution | University of Malta | en_GB |
dc.publisher.department | Faculty of Information and Communication Technology | en_GB |
dc.description.reviewed | N/A | en_GB |
dc.contributor.creator | Portelli, Christopher | - |
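The pipeline described in the abstract (facial features extracted from a webcam frame, classified into an emotion, which then selects a track from a pre-labelled music database) can be sketched as follows. This is a purely illustrative reconstruction, not the author's code: the feature vectors, emotion labels, centroids, and track names are all hypothetical placeholders, and a simple nearest-centroid rule stands in for whatever learning algorithm the thesis actually uses.

```python
# Hypothetical sketch of the mood-based music selection pipeline.
# A toy nearest-centroid classifier stands in for the trained emotion
# model; a small dict stands in for the labelled music database.
import math

# Toy "data models created beforehand": a mean feature vector per emotion
# (e.g. mouth curvature, brow distance) -- illustrative values only.
EMOTION_CENTROIDS = {
    "happy": (0.9, 0.2),
    "sad": (0.1, 0.8),
    "calm": (0.5, 0.5),
}

# Toy labelled music database, standing in for the output of the
# supervised learning step over labelled songs.
MUSIC_DB = {
    "happy": ["upbeat_track.mp3"],
    "sad": ["slow_ballad.mp3"],
    "calm": ["ambient_piece.mp3"],
}

def classify_emotion(features):
    """Return the emotion whose centroid is nearest the feature vector."""
    return min(
        EMOTION_CENTROIDS,
        key=lambda emotion: math.dist(features, EMOTION_CENTROIDS[emotion]),
    )

def pick_track(features):
    """Map extracted facial features to (emotion, matching track)."""
    emotion = classify_emotion(features)
    return emotion, MUSIC_DB[emotion][0]

# Features close to the "happy" centroid select an upbeat track.
print(pick_track((0.85, 0.25)))  # -> ('happy', 'upbeat_track.mp3')
```

In the actual system, the feature extraction would come from webcam frames and the classifier from training data; here both are reduced to fixed numbers so the control flow from emotion to playlist is visible in isolation.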
Appears in Collections: | Dissertations - FacICT - 2015 |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
15BSCIT023.pdf (Restricted Access) | | 2.04 MB | Adobe PDF | |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.