Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/77708
Title: A study of automated feature extraction and classification for music emotion recognition
Authors: Abela Scicluna, Maria (2015)
Keywords: Music and technology; Semantic computing; Emotions in music
Issue Date: 2015
Citation: Abela Scicluna, M. (2015). A study of automated feature extraction and classification for music emotion recognition (Master’s dissertation).
Abstract: Research in Music Information Retrieval has approached music either through the raw acoustic signal or through human-annotated textual metadata. These approaches have resulted in a semantic gap between computational models and the understanding of music from a purely human perspective. This dissertation attempts to highlight points of focus that can contribute to narrowing the semantic gap. It studies the relationship between listeners' understanding of music and the underlying musical concepts. The study considers music emotion as the higher-level concept for classification. Approaches to music emotion recognition through the extraction of perceptual features are compared. A dataset was built to carry out the comparative analysis. Human-rated features and emotion descriptors were obtained through a subjective test, while computational features were extracted using publicly available tools. How accurately the computational features represent the perceptual musical concepts is tested using regressive modelling. Accuracy is checked by comparing the performance of the modelled and extracted features in music emotion recognition. This dissertation finds that the computational features still do not singularly represent their respective perceptual concepts. The subjectivity of emotions is found to lie mainly in the valence dimension, particularly for calmer music, where classification accuracy is as low as 12%. The accuracy in classifying the energetic mood component exceeds 90%, since its main predictors, Speed and Dynamics, are the most accurately modelled features, with correlation coefficients of 0.89 and 0.93 respectively. Hence, to improve Music Emotion Recognition, focus needs to be directed at identifying computational features that can accurately model the perceptual predictors for valence. This dissertation concludes that computational perceptual features have the potential to narrow the semantic gap and thus simplify tasks in Music Information Retrieval.
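The evaluation pipeline the abstract describes (regress a human-rated perceptual feature such as Speed on computational features, score the fit by correlation coefficient, then compare emotion-classification accuracy using modelled versus raw features) can be sketched as follows. This is a minimal sketch assuming scikit-learn and synthetic data; the feature values, labels, and model choices are illustrative assumptions, not the dissertation's actual tools or dataset.

```python
# Illustrative sketch of the evaluation pipeline described in the abstract.
# Data, feature names, and model choices are assumptions for illustration,
# not the dissertation's actual dataset or tools.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical data: computational audio features per music excerpt
# (e.g. tempo, RMS energy, spectral centroid) and a human-rated
# perceptual target ("Speed") correlated with the first feature.
X = rng.normal(size=(100, 3))
speed_rating = 0.9 * X[:, 0] + rng.normal(scale=0.3, size=100)

# Step 1: regression model of the perceptual feature from the
# computational features, evaluated out-of-fold.
speed_pred = cross_val_predict(LinearRegression(), X, speed_rating, cv=5)

# Step 2: correlation coefficient between modelled and human-rated values,
# the metric the abstract reports (0.89 for Speed, 0.93 for Dynamics).
r = np.corrcoef(speed_rating, speed_pred)[0, 1]
print(f"Speed model correlation: r = {r:.2f}")

# Step 3: compare emotion-classification accuracy using the modelled
# perceptual feature versus the raw computational features.
energy_class = (speed_rating > 0).astype(int)  # hypothetical arousal labels
acc_modelled = cross_val_score(SVC(), speed_pred.reshape(-1, 1),
                               energy_class, cv=5).mean()
acc_raw = cross_val_score(SVC(), X, energy_class, cv=5).mean()
print(f"accuracy (modelled): {acc_modelled:.2f}, (computational): {acc_raw:.2f}")
```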
Description: M.SC.ICT COMMS&COMPUTER ENG.
URI: https://www.um.edu.mt/library/oar/handle/123456789/77708
Appears in Collections: Dissertations - FacICT - 2015; Dissertations - FacICTCCE - 2015
Files in This Item:
File | Description | Size | Format
---|---|---|---
M.SC.COMM._COMPUTER ENG._Abela Scicluna_Maria_2015.pdf | Restricted Access | 8.15 MB | Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.