Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/120598
Full metadata record
DC Field | Value | Language
dc.date.accessioned | 2024-04-09T12:02:22Z | -
dc.date.available | 2024-04-09T12:02:22Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Pulis, M. (2023). Deep reinforcement learning for football player decision analysis (Master's dissertation). | en_GB
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/120598 | -
dc.description | M.Sc.(Melit.) | en_GB
dc.description.abstract | Analysis of a football player's decision-making process often relies heavily on easily interpretable statistics such as goals scored and assists provided. While these statistics are useful, relying solely on them causes more nuanced high-level performances to be overlooked, because results-based analysis does not account for the ever-present role of luck in distorting outcomes. A team can consistently generate higher-quality goal-scoring opportunities than its opponents throughout a match and still lose due to unfortunate finishing or an outstanding goalkeeping display by the opposition. Recent advances in the statistical analysis of football events have yielded more objective metrics, such as Expected Goals (xG) and Expected Threat (xT), to address this problem. These metrics have been used to develop Possession Value Models (PVMs) for evaluating the decision-making of players. However, these models do not take into account the context in which actions were taken, since they rely solely on event data about the actions themselves. To evaluate player decisions objectively in context, we propose a novel model, which we call Decision Value (DV), generated through offline Deep Reinforcement Learning. This model was trained on a dataset of past matches consisting of actions performed by elite-level football players. The dataset comprises both event data and tracking data, the latter providing the coordinates of teammates and opposing players. This data was pre-processed and augmented into a new dataset that incorporates the details of each action, the coordinates of teammates and opposing players, and the reward obtained as a result of the action. This richer dataset allowed the model to learn to evaluate decisions within the context in which they were made. The Implicit Q-Learning (IQL) algorithm was used to perform offline reinforcement learning (see the sketches following this record). | en_GB
dc.language.iso | en | en_GB
dc.rights | info:eu-repo/semantics/openAccess | en_GB
dc.subject | Soccer -- Decision making | en_GB
dc.subject | Deep learning (Machine learning) | en_GB
dc.subject | Reinforcement learning | en_GB
dc.title | Deep reinforcement learning for football player decision analysis | en_GB
dc.type | masterThesis | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation, provided that the author is properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.publisher.institution | University of Malta | en_GB
dc.publisher.department | Faculty of Information and Communication Technology. Department of Artificial Intelligence | en_GB
dc.description.reviewed | N/A | en_GB
dc.contributor.creator | Pulis, Michael (2023) | -
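
To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch of how one event-data record and its matching tracking frame could be combined into a single (state, action, reward) sample for offline RL. All field names, the action vocabulary, and the reward scheme (event, frame, ACTION_TYPES, the goal/possession rewards) are illustrative assumptions, not the dissertation's actual schema.

import numpy as np

# Hypothetical sketch: merge one event record with its tracking frame
# into a (state, action, reward) sample. Field names and the reward
# scheme are assumptions made for illustration.

ACTION_TYPES = ["pass", "dribble", "shot", "cross", "clearance"]

def build_sample(event: dict, frame: dict) -> dict:
    """event: one event-data record (type, start/end location, outcome).
    frame: the tracking snapshot at the moment of the action, holding
    pitch coordinates for all teammates and opposing players."""
    # State: ball location plus flattened teammate/opponent coordinates.
    state = np.concatenate([
        np.asarray(event["start_xy"], dtype=np.float32),              # ball (x, y)
        np.asarray(frame["teammates_xy"], dtype=np.float32).ravel(),  # 11 x 2
        np.asarray(frame["opponents_xy"], dtype=np.float32).ravel(),  # 11 x 2
    ])
    # Action: one-hot action type plus the intended end location.
    action = np.concatenate([
        np.eye(len(ACTION_TYPES), dtype=np.float32)[ACTION_TYPES.index(event["type"])],
        np.asarray(event["end_xy"], dtype=np.float32),
    ])
    # Reward: a simple illustrative scheme -- a goal is worth 1.0,
    # losing possession costs a small penalty, everything else is neutral.
    if event.get("goal", False):
        reward = 1.0
    elif not event.get("possession_retained", True):
        reward = -0.1
    else:
        reward = 0.0
    return {"state": state, "action": action, "reward": np.float32(reward)}

# Example usage with dummy data:
# sample = build_sample(
#     {"type": "pass", "start_xy": (50.0, 34.0), "end_xy": (60.0, 30.0),
#      "goal": False, "possession_retained": True},
#     {"teammates_xy": np.zeros((11, 2)), "opponents_xy": np.zeros((11, 2))},
# )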
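
The abstract names IQL (Implicit Q-Learning, Kostrikov et al., 2021) as the offline RL algorithm. Below is a minimal PyTorch sketch of its three objectives: expectile value regression, a TD critic update that never queries out-of-distribution actions, and advantage-weighted policy extraction. The network interfaces and the hyperparameters (TAU, BETA, GAMMA, the weight clamp) are illustrative assumptions, not the dissertation's settings.

import torch
import torch.nn.functional as F

# Sketch of the three IQL losses, assuming q_net(s, a), v_net(s) and
# target_q_net(s, a) return tensors of shape (batch,), and policy(s)
# returns a torch.distributions object whose log_prob reduces over
# action dimensions. Hyperparameters are illustrative.

TAU, BETA, GAMMA = 0.7, 3.0, 0.99

def expectile_loss(diff: torch.Tensor, tau: float = TAU) -> torch.Tensor:
    # Asymmetric squared loss: positive errors weighted by tau,
    # negative errors by (1 - tau); tau > 0.5 pushes V toward an
    # upper expectile of Q.
    weight = torch.where(diff > 0, tau, 1.0 - tau)
    return (weight * diff.pow(2)).mean()

def iql_losses(batch, q_net, v_net, target_q_net, policy):
    s, a, r, s_next, done = batch  # tensors drawn from the offline dataset

    # 1) Value loss: V(s) regresses toward an upper expectile of Q(s, a),
    #    using only state-action pairs that appear in the dataset.
    with torch.no_grad():
        q_sa = target_q_net(s, a)
    v_loss = expectile_loss(q_sa - v_net(s))

    # 2) Critic loss: TD target built from V(s'), so no action is ever
    #    sampled outside the dataset's support.
    with torch.no_grad():
        target = r + GAMMA * (1.0 - done) * v_net(s_next)
    q_loss = F.mse_loss(q_net(s, a), target)

    # 3) Policy loss: advantage-weighted behavioural cloning, with the
    #    exponentiated advantage clamped for numerical stability.
    with torch.no_grad():
        adv_weight = torch.exp(BETA * (q_sa - v_net(s))).clamp(max=100.0)
    pi_loss = -(adv_weight * policy(s).log_prob(a)).mean()

    return v_loss, q_loss, pi_loss

Decoupling the value and critic updates in this way is what lets IQL learn entirely from logged actions, which matches the abstract's setting of evaluating decisions from recorded match data rather than from interaction with an environment.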
Appears in Collections:Dissertations - FacICT - 2023
Dissertations - FacICTAI - 2023

Files in This Item:
File | Description | Size | Format
2319ICTICS520005065286_1.PDF | | 8.75 MB | Adobe PDF

