Title: RankNEAT: outperforming stochastic gradient search in preference learning tasks
Authors: Pinitas, Kosmas
Makantasis, Konstantinos
Liapis, Antonios
Yannakakis, Georgios N.
Keywords: Artificial intelligence
Human-computer interaction
Neural networks (Computer science)
Genetic algorithms
Computer games
Issue Date: 2022
Publisher: Association for Computing Machinery
Citation: Pinitas, K., Makantasis, K., Liapis, A. & Yannakakis, G. N. (2022). RankNEAT: outperforming stochastic gradient search in preference learning tasks. Genetic and Evolutionary Computation Conference 2022 (GECCO '22), Boston. 1084-1092.
Abstract: Stochastic gradient descent (SGD) is a premium optimization method for training neural networks, especially for learning objectively defined labels such as image objects and events. When a neural network is instead faced with subjectively defined labels - such as human demonstrations or annotations - SGD may struggle to explore the deceptive and noisy loss landscapes caused by the inherent bias and subjectivity of humans. While neural networks are often trained via preference learning algorithms in an effort to eliminate such data noise, the de facto training methods rely on gradient descent. Motivated by the lack of empirical studies on the impact of evolutionary search on the training of preference learners, we introduce the RankNEAT algorithm, which learns to rank through neuroevolution of augmenting topologies. We test the hypothesis that RankNEAT outperforms traditional gradient-based preference learning within the affective computing domain, in particular predicting annotated player arousal from the game footage of three dissimilar games. RankNEAT yields superior performance compared to the gradient-based preference learner (RankNet) in the majority of experiments, since its architecture optimization capacity acts as an efficient feature selection mechanism, thereby eliminating overfitting. Results suggest that RankNEAT is a viable and highly efficient evolutionary alternative for preference learning.
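The abstract contrasts RankNEAT with the gradient-based preference learner RankNet. As context, the following is a minimal sketch of the pairwise cross-entropy loss that RankNet-style preference learners optimize; the function and variable names are illustrative, not taken from the paper.

```python
import math

def pairwise_preference_loss(s_i, s_j, preferred_is_i=True):
    """RankNet-style pairwise cross-entropy loss (illustrative sketch).

    s_i, s_j: scalar scores a model assigns to two items (e.g. two
    game-footage windows); the target records which item the annotator
    preferred (e.g. judged as higher arousal).
    """
    # Logistic model of P(item i is preferred over item j)
    p_ij = 1.0 / (1.0 + math.exp(-(s_i - s_j)))
    target = 1.0 if preferred_is_i else 0.0
    # Binary cross-entropy between model preference and annotation
    return -(target * math.log(p_ij) + (1.0 - target) * math.log(1.0 - p_ij))
```

A gradient-based learner such as RankNet backpropagates this loss through the scoring network; RankNEAT instead evolves the network's weights and topology (via NEAT) to minimize the same kind of ranking objective, which is where the paper's comparison lies.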
URI: https://www.um.edu.mt/library/oar/handle/123456789/102363
Appears in Collections:Scholarly Works - InsDG

Files in This Item:
File: RankNEAT_Outperforming_Stochastic_Gradient_Search_in_Preference_Learning_Tasks_2022.pdf (Restricted Access)
Size: 990.63 kB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.