Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/121368
Title: MIRAI: a modifiable, interpretable, and rational AI decision support system
Authors: Mallia, Natalia (2020)
Keywords: Deep learning (Machine learning)
Algorithms
Industry 4.0
Artificial intelligence
Decision support systems
Issue Date: 2020
Citation: Mallia, N. (2020). MIRAI: a modifiable, interpretable, and rational AI decision support system (Master’s dissertation).
Abstract: With the recent advancements and results obtained by Deep Learning, several corporations are eager to incorporate these algorithms into their workflows, especially with the emergence of Industry 4.0. However, decision makers using these systems find themselves unable to fully trust the AI on evaluation metrics alone, and require more transparency into the rationale behind their systems. As such, research has moved in the direction of Explainable AI (XAI), where explainable algorithms are created, or existing algorithms are reverse-engineered, to open up the black box of opaque AI models. Both approaches present the outcome in a manner interpretable by humans. In this research project, we proposed an Explainable AI architecture for predictive analysis in industry. We utilised a novel approach of combining the rule-based reasoning methodology of the Differentiable Inductive Logic Programming (δILP) algorithm with an explainable Machine Learning (ML) algorithm, a Bidirectional Long Short-Term Memory (BiLSTM) Neural Network (NN). The combination of these algorithms created a fully explainable system capable of a higher level of reasoning, or 'Deep Understanding'. In turn, this implementation of Deep Understanding allowed us to produce more reliable and faithful explanations for the given application. Quantitative evaluation of the system took place by means of standard Machine Learning evaluation metrics such as F1-Score, Precision, Recall, and Receiver Operating Characteristic (ROC) curves. Our BiLSTM performed with an average of 85% over several metrics, and δILP performed at over 95%. We further evaluated our system by taking the derived interpretations and transforming them into English explanations via the inferences and deductions stored in a standardised Knowledge Base.
We verified these explanations with industry professionals to determine whether the deductions made sense in a practical context. From this, we understood that a combination of acceleration and rotation values in either the X or the Y axis exclusively may lead to an error. Highlighting these features in an explanation and sorting them by their strength gives technicians an idea of which solution to apply, saving time in deconstructing the problem and in turn improving Overall Equipment Efficiency (OEE). In future work, we would enhance these explanations by moving our solution to prescriptive maintenance, where we would also highlight possible solutions for the error indicated by MIRAI.
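The quantitative metrics named in the abstract (Precision, Recall, F1-Score) can be sketched in a few lines of plain Python. The fault labels and predictions below are illustrative placeholders, not figures from the dissertation:

```python
def precision_recall_f1(y_true, y_pred):
    """Binary-classification metrics (1 = fault detected, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative ground truth and model predictions
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(f"Precision={p:.2f} Recall={r:.2f} F1={f:.2f}")  # Precision=0.75 Recall=0.75 F1=0.75
```

In practice a library such as scikit-learn would be used for these metrics and for ROC curves; the hand-rolled version above only shows what the reported numbers measure.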
Description: M.Sc. Artificial Intelligence
URI: https://www.um.edu.mt/library/oar/handle/123456789/121368
Appears in Collections:Dissertations - FacICT - 2020
Dissertations - FacICTAI - 2020

Files in This Item:
File: Natalia_Mallia.pdf
Access: Restricted Access
Size: 6.63 MB
Format: Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.