Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/120598
Title: Deep reinforcement learning for football player decision analysis
Authors: Pulis, Michael
Keywords: Soccer -- Decision making
Deep learning (Machine learning)
Reinforcement learning
Issue Date: 2023
Citation: Pulis, M. (2023). Deep reinforcement learning for football player decision analysis (Master's dissertation).
Abstract: Analysis of a football player’s decision-making process often relies heavily on easily interpretable statistics such as the goals scored and the assists provided by the player. While these statistics are useful, relying solely on them leads to more nuanced high-level performances being overlooked, because results-based analysis does not account for the ever-present role of luck that distorts outcomes. A team can consistently generate higher-quality goal-scoring opportunities than its opponents throughout a match, but still end up losing due to unfortunate finishing or an outstanding goalkeeping display by the opposition. Recent advances in the statistical analysis of football events have yielded more objective metrics, such as Expected Goals (xG) and Expected Threat (xT), to address this problem. These metrics have been used to develop Possession Value Models (PVMs), which can be used to evaluate players’ decision making. However, these models do not take into account the context within which actions were taken, since they rely solely on event data about the actions themselves. To evaluate player decisions objectively in context, we propose a novel model, which we call Decision Value (DV), generated through offline Deep Reinforcement Learning. The model was trained on a dataset of past matches consisting of the actions performed by elite-level football players. The dataset comprises both event data and tracking data, the latter providing the coordinates of teammates and opposition players. This data was pre-processed and augmented into a new dataset that incorporates the details of each action, the coordinates of teammates and opposition players, and the reward obtained as a result of the action. This richer dataset allowed the model to learn to evaluate decisions within the context in which they were made. The Implicit Q-Learning (IQL) algorithm was used to perform offline reinforcement learning.
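The record does not give the dissertation's exact data schema, so the following is only a rough illustrative sketch of the pre-processing step the abstract describes: merging event data and tracking data into (state, action, reward, next-state) transitions of the kind offline RL algorithms such as IQL consume. All field names (`ball`, `teammates`, `opponents`, `action`, `reward`) are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

XY = Tuple[float, float]

@dataclass
class Transition:
    state: List[float]        # flattened context: ball + player coordinates
    action: int               # encoded action type (e.g. 0 = pass, 1 = shot)
    reward: float             # reward obtained as a result of the action
    next_state: List[float]   # context after the action

def build_state(ball: XY, teammates: List[XY], opponents: List[XY]) -> List[float]:
    """Flatten the ball position and all player coordinates into one vector."""
    state = list(ball)
    for x, y in teammates + opponents:
        state.extend([x, y])
    return state

def build_transitions(events: List[dict]) -> List[Transition]:
    """Pair consecutive events into (state, action, reward, next_state) tuples."""
    transitions = []
    for cur, nxt in zip(events, events[1:]):
        transitions.append(Transition(
            state=build_state(cur["ball"], cur["teammates"], cur["opponents"]),
            action=cur["action"],
            reward=cur["reward"],
            next_state=build_state(nxt["ball"], nxt["teammates"], nxt["opponents"]),
        ))
    return transitions
```

In an offline setting, a buffer of such transitions built from past matches is all the learner sees; no new interaction with the environment is required.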
Description: M.Sc.(Melit.)
URI: https://www.um.edu.mt/library/oar/handle/123456789/120598
Appears in Collections:Dissertations - FacICT - 2023
Dissertations - FacICTAI - 2023

Files in This Item:
File: 2319ICTICS520005065286_1.PDF (8.75 MB, Adobe PDF)


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.