Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/77708
Full metadata record
DC Field | Value | Language
dc.date.accessioned | 2021-06-25T09:17:27Z | -
dc.date.available | 2021-06-25T09:17:27Z | -
dc.date.issued | 2015 | -
dc.identifier.citation | Abela Scicluna, M. (2015). A study of automated feature extraction and classification for music emotion recognition (Master’s dissertation). | en_GB
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/77708 | -
dc.description | M.SC.ICT COMMS&COMPUTER ENG. | en_GB
dc.description.abstract | Research in Music Information Retrieval has been approached by considering the raw acoustic signal or the human-annotated textual metadata. These approaches have resulted in a semantic gap between computational models and the understanding of music from a purely human perspective. This dissertation is an attempt to highlight points of focus that can contribute to the narrowing of the semantic gap. It studies the relationship between the listeners' understanding of music and the underlying musical concepts. The study considers music emotion as the higher-level concept for classification. Approaches to music emotion recognition through the extraction of perceptual features are compared. A dataset was built to carry out the comparative analysis. Human-rated features and emotion descriptors are obtained through a subjective test, while computational features are extracted using publicly available tools. The accurate representation of the perceptual musical concepts by the computational features is tested using regressive modelling. Accuracy is checked by comparing the performance of the modelled and extracted features in music emotion recognition. This dissertation finds that the computational features still do not singularly represent the respective perceptual concepts. Subjectivity of emotions is found to be mainly located in the valence dimension, particularly for calmer music, where classification accuracy is as low as 12%. The accuracy in classifying the energetic mood component exceeds 90%, since its main predictors, Speed and Dynamics, are the most accurately modelled features, with correlation coefficients of 0.89 and 0.93 respectively. Hence, for improving Music Emotion Recognition, focus needs to be directed to the identification of computational features that can accurately model the perceptual predictors for valence. This dissertation concludes that computational perceptual features have the potential of narrowing the semantic gap and thus simplifying tasks in Music Information Retrieval. | en_GB
dc.language.iso | en | en_GB
dc.rights | info:eu-repo/semantics/restrictedAccess | en_GB
dc.subject | Music and technology | en_GB
dc.subject | Semantic computing | en_GB
dc.subject | Emotions in music | en_GB
dc.title | A study of automated feature extraction and classification for music emotion recognition | en_GB
dc.type | masterThesis | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.publisher.institution | University of Malta | en_GB
dc.publisher.department | Faculty of Information and Communication Technology. Department of Communications and Computer Engineering | en_GB
dc.description.reviewed | N/A | en_GB
dc.contributor.creator | Abela Scicluna, Maria (2015) | -
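
To illustrate the regressive-modelling step described in the abstract above, the following is a minimal sketch in Python, assuming scikit-learn and SciPy are available: it fits a linear regression from computational audio features to a human-rated perceptual feature (perceived Speed) and scores the model by the correlation between modelled and human-rated values, the measure the abstract quotes (0.89 for Speed, 0.93 for Dynamics). All data, feature names, and the choice of linear regression are hypothetical stand-ins, not the dissertation's actual tools or dataset.

# Hypothetical sketch of the evaluation pipeline outlined in the abstract.
# Everything here is synthetic; in the study the computational features
# would come from publicly available extraction tools and the ratings
# from the subjective test.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic stand-ins: 100 music excerpts, 8 computational features each.
X = rng.normal(size=(100, 8))
# Synthetic human-rated "Speed" scores, loosely driven by two of the
# features plus rating noise.
y_speed = 0.7 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * rng.normal(size=100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y_speed, test_size=0.3, random_state=0
)

# Regressive modelling: map computational features onto the perceptual rating.
model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# Correlation coefficient between modelled and human-rated values on the
# held-out excerpts.
r, _ = pearsonr(y_test, y_pred)
print(f"Correlation (modelled vs. human-rated Speed): {r:.2f}")

The modelled features would then replace the human ratings as inputs to an emotion classifier, and classification accuracy with modelled versus human-rated inputs would be compared, which is how the abstract's per-mood accuracy figures (12% for calm/valence, over 90% for the energetic component) are framed.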
Appears in Collections:
Dissertations - FacICT - 2015
Dissertations - FacICTCCE - 2015

Files in This Item:
File | Description | Size | Format
M.SC.COMM._COMPUTER ENG._Abela Scicluna_Maria_2015.pdf | Restricted Access | 8.15 MB | Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.