Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/120598
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.date.accessioned | 2024-04-09T12:02:22Z | - |
dc.date.available | 2024-04-09T12:02:22Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Pulis, M. (2023). Deep reinforcement learning for football player decision analysis (Master's dissertation). | en_GB |
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/120598 | - |
dc.description | M.Sc.(Melit.) | en_GB |
dc.description.abstract | Analysis of a football player’s decision-making process often relies heavily on easily interpretable statistics such as the goals scored and assists provided by the player. While these statistics are useful, relying solely on them means that more nuanced high-level performances are overlooked, because results-based analysis does not account for the ever-present role of luck in distorting outcomes. A team can consistently generate higher-quality goal-scoring opportunities than its opponents throughout a match, yet still lose due to unfortunate finishing or an outstanding goalkeeping display by the opposition. Recent advances in the statistical analysis of football events have yielded more objective metrics such as Expected Goals (xG) and Expected Threat (xT) to address this problem. These metrics have been used to develop Possession Value Models (PVMs) that evaluate players’ decision-making. However, these models do not take into account the context within which actions were taken, since they rely solely on event data about the actions themselves. To evaluate player decisions objectively in context, we propose a novel model, which we call Decision Value (DV), generated through offline deep reinforcement learning. The model was trained on a dataset of past matches consisting of the actions performed by elite-level football players. The dataset contains both event data and tracking data, the latter providing the coordinates of teammates and opposition players. This data was pre-processed and augmented into a new dataset that combines the details of each action and the coordinates of teammates and opposition players with the reward obtained as a result of the action. This richer dataset allowed the model to learn to evaluate decisions within the context in which they were made. The Implicit Q-Learning (IQL) algorithm was used to perform the offline reinforcement learning (an illustrative sketch follows this metadata record). | en_GB |
dc.language.iso | en | en_GB |
dc.rights | info:eu-repo/semantics/openAccess | en_GB |
dc.subject | Soccer -- Decision making | en_GB |
dc.subject | Deep learning (Machine learning) | en_GB |
dc.subject | Reinforcement learning | en_GB |
dc.title | Deep reinforcement learning for football player decision analysis | en_GB |
dc.type | masterThesis | en_GB |
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB |
dc.publisher.institution | University of Malta | en_GB |
dc.publisher.department | Faculty of Information and Communication Technology. Department of Artificial Intelligence | en_GB |
dc.description.reviewed | N/A | en_GB |
dc.contributor.creator | Pulis, Michael (2023) | - |
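The abstract describes training a Decision Value model by offline reinforcement learning with IQL on transitions built from event and tracking data. As a rough illustration only, below is a minimal PyTorch sketch of an IQL-style update on such transitions. All names, dimensions, hyperparameters (`STATE_DIM`, `ACTION_DIM`, `TAU`, `BETA`, `GAMMA`), the reward construction, and the use of the advantage Q(s, a) − V(s) as a per-action decision score are assumptions made for illustration; they do not reproduce the dissertation's actual implementation.

```python
# Minimal, self-contained sketch of an Implicit Q-Learning (IQL) update,
# in the spirit of the abstract above. Everything here (dimensions,
# hyperparameters, reward shape, networks) is an illustrative assumption,
# not the dissertation's code. The target Q network used in full IQL is
# omitted for brevity.
import torch
import torch.nn as nn

STATE_DIM = 50    # hypothetical: action features + (x, y) of all 22 players
ACTION_DIM = 8    # hypothetical: one-hot action type (pass, shot, dribble, ...)
TAU, BETA, GAMMA = 0.7, 3.0, 0.99  # expectile, AWR temperature, discount


def mlp(in_dim: int, out_dim: int) -> nn.Module:
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

q_net = mlp(STATE_DIM + ACTION_DIM, 1)   # Q(s, a)
v_net = mlp(STATE_DIM, 1)                # V(s), fit by expectile regression
policy = mlp(STATE_DIM, ACTION_DIM)      # action logits for weighted cloning
q_opt = torch.optim.Adam(q_net.parameters(), lr=3e-4)
v_opt = torch.optim.Adam(v_net.parameters(), lr=3e-4)
pi_opt = torch.optim.Adam(policy.parameters(), lr=3e-4)


def expectile_loss(diff: torch.Tensor, tau: float = TAU) -> torch.Tensor:
    # Asymmetric squared loss: with tau > 0.5, V(s) is pulled toward an
    # upper expectile of Q(s, a), approximating the value of the best
    # in-distribution action without ever querying unseen actions.
    weight = torch.where(diff > 0,
                         torch.full_like(diff, tau),
                         torch.full_like(diff, 1 - tau))
    return (weight * diff.pow(2)).mean()


def iql_step(batch):
    s, a, r, s_next, done = batch        # one transition per recorded action
    sa = torch.cat([s, a], dim=-1)

    # 1) Value step: expectile regression of V(s) toward Q(s, a).
    v_loss = expectile_loss(q_net(sa).detach() - v_net(s))
    v_opt.zero_grad(); v_loss.backward(); v_opt.step()

    # 2) Q step: the TD target uses V(s') instead of max_a' Q(s', a'),
    #    which keeps the update entirely within the offline dataset.
    target = r + GAMMA * (1.0 - done) * v_net(s_next).detach()
    q_loss = (q_net(sa) - target).pow(2).mean()
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    # 3) Policy step: advantage-weighted behavioural cloning. The advantage
    #    Q(s, a) - V(s) could also serve as a per-action "decision value":
    #    how much better the chosen action was than a typical one from
    #    that game state.
    adv = (q_net(sa) - v_net(s)).detach()
    weights = torch.clamp(torch.exp(BETA * adv), max=100.0)
    log_prob = torch.log_softmax(policy(s), dim=-1)
    pi_loss = -(weights * (log_prob * a).sum(-1, keepdim=True)).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()


if __name__ == "__main__":
    B = 32  # dummy batch of random transitions, just to exercise the shapes
    batch = (torch.randn(B, STATE_DIM),
             torch.eye(ACTION_DIM)[torch.randint(ACTION_DIM, (B,))],
             torch.randn(B, 1),            # e.g. reward from goals / xT gain
             torch.randn(B, STATE_DIM),
             torch.zeros(B, 1))
    iql_step(batch)
```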
Appears in Collections: | Dissertations - FacICT - 2023; Dissertations - FacICTAI - 2023 |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
2319ICTICS520005065286_1.PDF | | 8.75 MB | Adobe PDF |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.