Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/92212
Full metadata record
DC Field: Value [Language]
dc.date.accessioned: 2022-03-24T13:47:37Z
dc.date.available: 2022-03-24T13:47:37Z
dc.date.issued: 2021
dc.identifier.citation: Pulis, M. (2021). Music recommendation system (Bachelor's dissertation). [en_GB]
dc.identifier.uri: https://www.um.edu.mt/library/oar/handle/123456789/92212
dc.description: B.Sc. IT (Hons)(Melit.) [en_GB]
dc.description.abstract: Traditional music recommendation techniques are based on collaborative filtering (CF), which leverages the listening patterns of many users. While this technique is very effective, it cannot recommend new songs or artists, since no listening history exists to draw upon. This research addresses music recommendation for novel songs and artists by making recommendations based solely on audio content, rather than on metadata such as genre or artist, or on the listening histories of other users. We propose a similarity metric learned solely from raw audio content, which is then used to recommend songs that are close to the user's library under this similarity function. The metric was created by training a Siamese Neural Network (SNN) on a dataset of similar and dissimilar song pairs. Each song was first converted into a Mel-Spectrogram to obtain its bitmap representation. The SNN consists of two identical CNNs, each fed the Mel-Spectrogram bitmap of one song in a pair. By training on this dataset of song image pairs, the SNN learns to act as a similarity metric between songs based on the raw audio content (a code sketch of this model follows the metadata record below). The SNN model achieved an accuracy of 81.64% on the test set. A query-by-multiple-examples music recommendation system was then built on top of the developed similarity metric (also sketched below). To evaluate the performance of the proposed system, a survey website was developed: participants first create a preference set containing songs they enjoy, after which they rate recommendations. A naive genre-based baseline system was also implemented, and the recommendations made by both systems are interleaved randomly, so that the user rates a single list of songs. The survey is a blind study: participants are unaware that two systems are in use, which helps eliminate possible bias. The findings showed that the majority of participants preferred the recommendations made by the proposed system, which also received significantly higher ratings than the baseline system, while recommending songs that are less popular than those recommended by the baseline. [en_GB]
dc.language.iso: en [en_GB]
dc.rights: info:eu-repo/semantics/restrictedAccess [en_GB]
dc.subject: Neural networks (Computer science) [en_GB]
dc.subject: Data sets [en_GB]
dc.subject: Deep learning (Machine learning) [en_GB]
dc.subject: Music -- Computer network resources [en_GB]
dc.subject: Recommender systems (Information filtering) [en_GB]
dc.title: Music recommendation system [en_GB]
dc.type: bachelorThesis [en_GB]
dc.rights.holder: The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and make use of the information contained in it in accordance with the Copyright Legislation, provided that the author is properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. [en_GB]
dc.publisher.institution: University of Malta [en_GB]
dc.publisher.department: Faculty of ICT. Department of Artificial Intelligence [en_GB]
dc.description.reviewed: N/A [en_GB]
dc.contributor.creator: Pulis, Michael (2021)
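
Illustrative sketch (not part of the record): the abstract describes converting each song to a Mel-Spectrogram "bitmap" and training a Siamese Neural Network of two identical, weight-shared CNNs on similar/dissimilar song pairs. The following is a minimal sketch of that idea, assuming a PyTorch + librosa implementation; the names (song_to_melspec, TwinCNN, SiameseSimilarity), layer sizes, and distance-based scoring head are illustrative assumptions, not the dissertation's actual architecture.

import librosa
import numpy as np
import torch
import torch.nn as nn

def song_to_melspec(path, sr=22050):
    # Load the audio and convert it to a log-scaled Mel-Spectrogram "image".
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    return librosa.power_to_db(mel, ref=np.max)

class TwinCNN(nn.Module):
    # One weight-shared branch: embeds a spectrogram into a fixed-size vector.
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class SiameseSimilarity(nn.Module):
    # Both spectrograms pass through the SAME branch; a tiny head maps the
    # embedding distance to the probability that the pair is "similar".
    def __init__(self):
        super().__init__()
        self.twin = TwinCNN()
        self.head = nn.Linear(1, 1)

    def forward(self, spec_a, spec_b):
        ea, eb = self.twin(spec_a), self.twin(spec_b)
        dist = torch.norm(ea - eb, dim=1, keepdim=True)
        return torch.sigmoid(self.head(dist))

Trained with a binary cross-entropy loss (e.g. nn.BCELoss) on the labelled song pairs, the sigmoid output can then serve directly as the similarity score between two songs.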
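Likewise, a hedged sketch of the query-by-multiple-examples step and the blind-survey interleaving described in the abstract: each candidate song is scored by its mean learned similarity to the user's preference set, and the two systems' recommendation lists are shuffled into a single list so raters cannot tell which system produced which song. The mean-similarity scoring rule and all names here (recommend, interleave, similarity) are assumptions for illustration; the abstract does not pin down these details.

import random

def recommend(preference_set, candidates, similarity, k=10):
    # Rank candidates by their average similarity to every song the user enjoys.
    scored = [
        (cand, sum(similarity(p, cand) for p in preference_set) / len(preference_set))
        for cand in candidates
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [cand for cand, _ in scored[:k]]

def interleave(system_recs, baseline_recs):
    # Merge both systems' lists in random order so the survey stays blind.
    merged = [(s, "snn") for s in system_recs] + [(b, "baseline") for b in baseline_recs]
    random.shuffle(merged)
    return merged
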
Appears in Collections:Dissertations - FacICT - 2021
Dissertations - FacICTAI - 2021

Files in This Item:
File: 21BITAI032.pdf (Restricted Access; copy available on request)
Size: 3.64 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.