Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/65403
Full metadata record
dc.date.accessioned: 2020-12-09T10:33:29Z
dc.date.available: 2020-12-09T10:33:29Z
dc.date.issued: 2019
dc.identifier.citation: Farrugia, G. (2019). Discovering decisions in neural networks (Bachelor's dissertation). [en_GB]
dc.identifier.uri: https://www.um.edu.mt/library/oar/handle/123456789/65403
dc.description: B.SC.(HONS)COMP.SCI. [en_GB]
dc.description.abstract: Neural networks are typically regarded as black-box models due to the complexity of their hidden layers. The applicability of recent advances in neural-network classification to real-world problems is hindered by their lack of explainability. In critical scenarios where a bare decision is not enough, each decision must be backed by reasons, and reliability comes into play. Here, we used a spatial-relation dataset of geometric, language and depth features to train a neural network to predict spatial prepositions. We attempted to extract explanations by applying Layer-wise Relevance Propagation (LRP) to the trained model, generating relevance measures for individual inputs over positive instances. This technique redistributes relevance at each layer of the network, starting from the output and ending with relevance measures at the input layer. The resulting feature relevance measures are treated as explanations, since they indicate each feature's contribution to the network's prediction. Because explanations proved somewhat biased when pooling feature relevances, a baseline explanation was generated as an indicator of global input relevance for the model. Improved explanations were created by taking the difference of each individual explanation from the baseline explanation, producing explanations by variation (from the baseline). The feature-contribution measures obtained for each spatial preposition were evaluated qualitatively to check whether the explanations followed intuition. The results showed that the explanation techniques used produced different feature rankings but agreed on the most relevant feature. [en_GB]
dc.language.iso: en [en_GB]
dc.rights: info:eu-repo/semantics/restrictedAccess [en_GB]
dc.subject: Neural networks (Computer science) [en_GB]
dc.subject: Feedforward control systems [en_GB]
dc.subject: Machine learning [en_GB]
dc.title: Discovering decisions in neural networks [en_GB]
dc.type: bachelorThesis [en_GB]
dc.rights.holder: The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate copyright legislation or as modified by any successive legislation. Users may access this work and make use of the information contained in it in accordance with the copyright legislation, provided that the author is properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. [en_GB]
dc.publisher.institution: University of Malta [en_GB]
dc.publisher.department: Faculty of Information and Communication Technology. Department of Computer Science [en_GB]
dc.description.reviewed: N/A [en_GB]
dc.contributor.creator: Farrugia, Gabriel (2019)
Appears in Collections:Dissertations - FacICT - 2019
Dissertations - FacICTCS - 2019
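The abstract above describes Layer-wise Relevance Propagation as redistributing relevance layer by layer, from the output back to the input features. As an illustration only (the dissertation itself is restricted-access, so this is not its code), the following is a minimal NumPy sketch of one common LRP variant, the epsilon rule, for a small fully connected network; the network shapes, function name, and the choice of the epsilon rule are all assumptions for the example:

```python
import numpy as np

def lrp_epsilon(activations, weights, biases, R_out, eps=1e-9):
    """Redistribute output relevance back to the input (LRP epsilon rule).

    activations: list of layer activation vectors, input layer first.
    weights[l], biases[l]: parameters mapping layer l to layer l + 1.
    R_out: relevance assigned at the output layer (e.g. the class scores).
    """
    R = R_out
    # Walk backwards from the output layer to the input layer.
    for a, W, b in zip(reversed(activations[:-1]),
                       reversed(weights), reversed(biases)):
        z = a @ W + b                               # pre-activations of the upper layer
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabiliser (avoids /0)
        s = R / z                                   # relevance per unit of pre-activation
        R = a * (s @ W.T)                           # redistribute to the lower layer
    return R  # per-input-feature relevance: the "explanation"

# Tiny demo on a hypothetical 4-3-2 ReLU network with random weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)
x = rng.normal(size=4)
h = np.maximum(x @ W1 + b1, 0.0)        # hidden activations (forward pass)
out = h @ W2 + b2                       # output scores
R_in = lrp_epsilon([x, h, out], [W1, W2], [b1, b2], R_out=out)
# R_in has one relevance score per input feature; with zero biases and a
# small eps, their sum approximately conserves the total output relevance.
```

The per-feature scores in `R_in` play the role of the "feature relevance measures" the abstract treats as explanations; the baseline-difference step described there would then subtract a baseline explanation from each such vector.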

Files in This Item:
File | Description | Size | Format
19BCS004 - Farrugia Gabriel.pdf | Restricted Access | 2.05 MB | Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.