Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/121368
Full metadata record
DC Field | Value | Language
dc.date.accessioned | 2024-04-25T13:26:21Z | -
dc.date.available | 2024-04-25T13:26:21Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Mallia, N. (2020). MIRAI: a modifiable, interpretable, and rational AI decision support system (Master’s dissertation). | en_GB
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/121368 | -
dc.description | M.Sc. Artificial Intelligence | en_GB
dc.description.abstract | With the recent advancements and results achieved by Deep Learning, several corporations are eager to incorporate these algorithms into their workflows, especially with the emergence of Industry 4.0. However, decision makers using these systems find themselves unable to fully trust the AI on the basis of evaluation metrics alone, and require greater transparency in the rationale behind their systems. Research has therefore moved in the direction of Explainable AI (XAI), in which explainable algorithms are created, or existing algorithms are reverse-engineered, to open up the 'black box' of opaque AI algorithms. Both approaches present the outcome in a manner that is interpretable by humans. In this research project, we proposed an Explainable AI architecture for predictive analysis in industry. We took a novel approach of combining the rule-based reasoning methodology of the Differentiable Inductive Logic Programming (δILP) algorithm with an explainable Machine Learning (ML) algorithm, a Bidirectional Long Short-Term Memory (BiLSTM) Neural Network (NN). The combination of these algorithms created a fully explainable system capable of a higher level of reasoning, or 'Deep Understanding'. In turn, this implementation of Deep Understanding allowed us to produce more reliable and faithful explanations for the given application. The system was evaluated quantitatively by means of standard Machine Learning evaluation metrics such as F1-score, precision, recall, and Receiver Operating Characteristic (ROC) curves: our BiLSTM averaged 85% across several metrics, and δILP performed at over 95%. We evaluated the system further by transforming the derived interpretations into English explanations via the inferences and deductions stored in a standardized Knowledge Base, and verifying them with industry professionals to determine whether the deductions made sense in a practical context. From this, we learned that a combination of acceleration and rotation values, exclusively in either the X or the Y axis, may lead to an error. Highlighting these features in an explanation and sorting them by strength gives technicians an idea of which solution to apply, saving the time otherwise spent deconstructing the problem and in turn improving Overall Equipment Effectiveness (OEE). In future work, we would enhance these explanations by moving our solution to prescriptive maintenance, where MIRAI also highlights possible solutions for the indicated error. | en_GB
dc.language.iso | en | en_GB
dc.rights | info:eu-repo/semantics/restrictedAccess | en_GB
dc.subject | Deep learning (Machine learning) | en_GB
dc.subject | Algorithms | en_GB
dc.subject | Industry 4.0 | en_GB
dc.subject | Artificial intelligence | en_GB
dc.subject | Decision support systems | en_GB
dc.title | MIRAI: a modifiable, interpretable, and rational AI decision support system | en_GB
dc.type | masterThesis | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.publisher.institution | University of Malta | en_GB
dc.publisher.department | Faculty of Information and Communication Technology. Department of Artificial Intelligence | en_GB
dc.description.reviewed | N/A | en_GB
dc.contributor.creator | Mallia, Natalia (2020) | -
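
For reference alongside the evaluation described in the abstract above, the sketch below shows how the metrics it names (precision, recall, F1-score, and the ROC curve) are conventionally computed with scikit-learn. This is a minimal illustration only: the variable names and sample values are assumptions for demonstration, not data or code from the dissertation.

```python
# Minimal sketch (not from the dissertation): computing the evaluation
# metrics named in the abstract with scikit-learn. y_true, y_pred, and
# y_score are hypothetical placeholders for a classifier's outputs.
from sklearn.metrics import (
    f1_score, precision_score, recall_score, roc_auc_score, roc_curve
)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]    # ground-truth labels (illustrative)
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]    # hard predictions from the model
y_score = [0.1, 0.9, 0.8, 0.3, 0.4, 0.2, 0.7, 0.95]  # predicted probabilities

print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1-score: ", f1_score(y_true, y_pred))
print("ROC AUC:  ", roc_auc_score(y_true, y_score))

# Points of the ROC curve (false-positive rate vs. true-positive rate),
# as would be plotted when reporting ROC results.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
```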
Appears in Collections:
Dissertations - FacICT - 2020
Dissertations - FacICTAI - 2020

Files in This Item:
File | Description | Size | Format
Natalia_Mallia.pdf | Restricted Access | 6.63 MB | Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.