Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/92135
Full metadata record
DC Field	Value	Language
dc.date.accessioned	2022-03-24T09:02:17Z	-
dc.date.available	2022-03-24T09:02:17Z	-
dc.date.issued	2021	-
dc.identifier.citation	Bonnici, J. O. (2021). Learning the game (Bachelor's dissertation).	en_GB
dc.identifier.uri	https://www.um.edu.mt/library/oar/handle/123456789/92135	-
dc.description	B.Sc. IT (Hons)(Melit.)	en_GB
dc.description.abstract	Video games are nowadays among the most entertaining media on the market, not only as base games but also through the competitive scenes they bring with them. Competitive esports titles such as Dota 2, League of Legends, Rocket League, Counter-Strike, Street Fighter V and StarCraft 2 carry prize pools in the millions. Fighting games such as Street Fighter, Mortal Kombat and Tekken form one of the most competitive genres, with players spending hours practising combos and playing online matches against other people. Artificial Intelligence (AI) has been used multiple times to defeat the single-player campaigns offered by these games. The main aim of this thesis is to create an AI agent capable of defeating the campaign of Street Fighter 2, a retro 2D fighting game. The agent will furthermore fight against itself in order to better itself with each cycle. These AI agents will eventually become more difficult to beat than the bots offered by the game. Professional players can use such agents as a new challenge, improving themselves by fighting formidable new opponents. Moreover, these AI agents could also be made to continue training while fighting these professional players. Tests will also be carried out to check the adaptability (generalisation) of these bots, i.e. how well an agent performs against characters it has not faced previously. This is because the agent is only trained against two characters: the first character it faces when training to beat the game, and a second character identical to its own due to self-play. Tests carried out indicate that the agent performed incredibly well against characters it was not trained against when using the character it was trained on. On the other hand, when using characters it was not trained on, the agent still managed to beat half of the game, although on the whole it did not perform nearly as well as the agent using its designated character.	en_GB
dc.language.iso	en	en_GB
dc.rights	info:eu-repo/semantics/restrictedAccess	en_GB
dc.subject	Video games	en_GB
dc.subject	Artificial intelligence	en_GB
dc.subject	Reinforcement learning	en_GB
dc.subject	Neural networks (Computer science)	en_GB
dc.title	Learning the game	en_GB
dc.type	bachelorThesis	en_GB
dc.rights.holder	The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder.	en_GB
dc.publisher.institution	University of Malta	en_GB
dc.publisher.department	Faculty of ICT. Department of Artificial Intelligence	en_GB
dc.description.reviewed	N/A	en_GB
dc.contributor.creator	Bonnici, Jake Owen (2021)	-
Appears in Collections:Dissertations - FacICT - 2021
Dissertations - FacICTAI - 2021

Files in This Item:
File	Description	Size	Format
21BITAI013.pdf	Restricted Access	3.6 MB	Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.