VR & Mental Health
Title of study: Simulating Schizophrenia Through VR and AI
Period of study: 2019-2021
Lead academics: Dr Vanessa Camilleri, Prof. Alexiei Dingli
Student: Mr Andrew Cachia (M.Sc.)
Brief description: This is a joint project between the Department of Artificial Intelligence and the Department of Mental Health Nursing of the University of Malta. The aim of this project is to use Virtual Reality and Artificial Intelligence to immerse the user in a virtual world and walk them through the symptoms that a person with schizophrenia would experience. The application is designed as a serious VR game in which the user attempts to perform simple tasks but faces challenges caused by the effects of schizophrenia. Artificial Intelligence controls the environment around the player and the interaction between the player and that environment: the AI analyses the user's voice and input commands and causes the environment to react.
VR & Pain Management
Title of study: Real-time Adaptive Virtual Reality for Pain Reduction
Period of study: 2018-2021
Lead academic: Prof. Alexiei Dingli
Student: Mr Luca Bondin
Brief description: Virtual Reality has been shown to be an effective tool in helping patients cope with pain through distraction; past studies report that patients who used Virtual Reality experienced drops in pain scores of as much as 50%. However, existing implementations often assume a one-size-fits-all approach, which risks the technology not being suitable for all patients, or the VR-based therapy not being equally effective across different patient groups. This research looks at tackling these risks. Using Virtual Reality and Artificial Intelligence, we will provide patients with a game that adapts to the patient's affective state in real time, combining three concepts closely associated with Artificial Intelligence: Affective Computing, Serious Games and Virtual Reality. The aim is an adaptive game that reacts to human emotions, keeping the patient constantly engaged with what is happening in the game and thus making it less likely that attention will shift to the pain symptoms they would normally feel.
Further Notes: The project also has the backing of the Sir Anthony Mamo Oncology Hospital (SAMOC), and has also received backing from the Stanford Virtual Human Interaction Lab.
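The real-time adaptation described above can be illustrated with a minimal sketch. This is not the project's actual code: the class name, the 0-to-1 "engagement" signal, the smoothing factor and the thresholds are all hypothetical stand-ins for whatever affect estimate the real system derives from its sensors.

```python
# Illustrative sketch (hypothetical names and thresholds, not the
# project's implementation): a game loop that adapts its difficulty to a
# smoothed "engagement" score from an affect sensor.

class AffectiveAdapter:
    """Maps a noisy affect reading (0 = disengaged, 1 = engaged)
    to a game difficulty level, smoothing to avoid jitter."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha          # smoothing factor for the sensor signal
        self.engagement = 0.5       # smoothed engagement estimate
        self.difficulty = 1.0       # current game difficulty

    def update(self, affect_reading):
        # Exponentially smooth the raw sensor reading.
        self.engagement = (self.alpha * affect_reading
                           + (1 - self.alpha) * self.engagement)
        # If the player is disengaging, raise stimulation; if
        # over-aroused, ease off; otherwise hold steady.
        if self.engagement < 0.4:
            self.difficulty = min(self.difficulty + 0.1, 3.0)
        elif self.engagement > 0.8:
            self.difficulty = max(self.difficulty - 0.1, 0.5)
        return self.difficulty
```

In a real deployment the `update` call would run once per frame or per affect-sensor sample, and `difficulty` would drive in-game parameters such as pacing or visual richness.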
AR & Art - Alice in Wonderland
Title of study: Living Stories - An Artistic Narrative in Augmented Reality
Period of study: 2018-2020
Lead academic: Dr Vanessa Camilleri
Student/researchers: Mr David Vella, Ms Deborah Vella, Mr Gabriel Camilleri, Valeria Holomjova
Brief description: In this study we explore a different design approach to AR interfaces involving narrative and storytelling. We explore people's perspectives on the role of Augmented Reality in enhancing artistic experiences and enriching the narratives expressed by artists through their work. This is a work-in-progress study that will use qualitative research methods and techniques to gain more insight into human-computer interaction design for AR and Mixed Reality interfaces.
AR & Art - Greek Mythology
Title of study: MythArt - Revealing Messages in Greek Mythology
Period of study: 2018-2019
Lead academic: Dr Vanessa Camilleri
Student/researchers: Mr Alessandro Sammut (B.Sc.), Mr Daniel Mallia (B.Sc.), Mr Dylan Agius (B.Sc.), Mr Georg Scheeberger (B.Sc.)
Brief description: In this study, we explore the extent of the effect of Augmented Reality on user interaction with Greek mythology art. Users download the mobile app to view a set of five posters showing different artworks depicting Greek mythology. The study is meant to deepen our understanding of the effect of AR on user interaction with art. We ask questions such as: Does it stimulate users to interact more with the art? Does it give users a different perspective on the way they see art through augmented reality? What is the user's perception of the use of AR to augment the artist's expression of the message? A qualitative study is currently underway to gain deeper insight into the use of AR in art for the general public.
Eye Tracking Art
Title of study: Eye Tracking Art
Period of study: 2017-2018
Lead academics: Prof. Vince Briffa, Dr Vanessa Camilleri
Student/researchers: Mr Matthew Attard (MA), Mr Neil Mizzi (BSc)
Brief description: This is a joint project between the Department of Artificial Intelligence and the Department of Digital Arts of the University of Malta. In this project we investigate questions such as: What do people focus on when viewing line sculptures? How do they see the art taking form? Can we understand their perception of the art through the movements of their eyes and the points where they focus the most? We use eye-tracking technology to capture eye-movement data and understand users' perceptions as they look at line art, in order to investigate the interpretation that users give to the art they are seeing.
AR, Eye Tracking & Art
Title of study: Fourtoni: the Virtual Sculpture
Period of study: 2017-2018
Lead academics: Prof. Vince Briffa, Dr Vanessa Camilleri
Student/researchers: Mr Matthew Attard (M.A.), Mr Matyou Galea (M.A., Ph.D.), Mr Alessandro Sammut (B.Sc.), Mr Daniel Mallia (B.Sc.), Mr Dylan Agius (B.Sc.)
Brief description: This is a joint project between the Department of Artificial Intelligence and the Department of Digital Arts of the University of Malta. Fourtoni is an Augmented Reality application that makes use of audience eye tracking data in order to recreate a fourth Triton from the existing three tritons in Vincent Apap’s Triton Fountain located in Triton Square, Valletta.
The virtual sculpture was launched on an Android platform on 28 September 2018 as part of the Science in the City Festival 2018. Fourtoni is a collaboration between Matthew Attard and Matthew Galea from the Department of Digital Arts, together with Dr Vanessa Camilleri from the Department of Artificial Intelligence. The virtual sculpture’s content was driven by research concerning the combination of the cortical homunculus representation of our body in our brain, and eye-tracking results involving free gazing.
VR & Games
Title of study: An Escape Room in VR: Escape the Virtual Room
Period of study: 2017-2018
Lead academics: Dr Vanessa Camilleri, Prof. Alexiei Dingli
Student/researcher: Ms Natalia Mallia (BSc)
Brief description: This VR mobile application uses Genetic Algorithms to generate random assets in a virtual room as clues to help the user escape from it, in typical escape-room style. Escape rooms have gained popularity as fun and immersive team-building activities. This VR escape room has been designed and developed in single-user mode; however, future work in this area includes a design for multiple players, to simulate real-life physical experiences in the virtual reality world.
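The genetic-algorithm idea behind asset generation can be sketched in a few lines. This is a minimal, generic GA, not the project's code: the "layout" encoding, the target arrangement and the fitness function are invented stand-ins for whatever layout quality measure the real game uses.

```python
import random

# Minimal genetic-algorithm sketch (illustrative only): evolve a room
# "layout" -- a list of asset slot values -- toward a hypothetical ideal
# arrangement, standing in for a real layout-fitness function.

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]   # hypothetical ideal clue placement
N_SLOTS = len(TARGET)

def fitness(layout):
    # Higher is better: count slots matching the target arrangement.
    return sum(a == b for a, b in zip(layout, TARGET))

def crossover(p1, p2):
    # Single-point crossover between two parent layouts.
    cut = random.randint(1, N_SLOTS - 1)
    return p1[:cut] + p2[cut:]

def mutate(layout, rate=0.1):
    # Randomly re-roll each slot with a small probability.
    return [random.randint(0, 9) if random.random() < rate else g
            for g in layout]

def evolve(pop_size=40, generations=100, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 9) for _ in range(N_SLOTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == N_SLOTS:
            break                      # perfect layout found
        elite = pop[:pop_size // 2]    # keep the fitter half unchanged
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)
```

Because the fitter half survives unmutated, the best layout's fitness never decreases across generations; mutation and crossover supply the variation that drives improvement.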
AR & Cultural Tourism
Title of study: Sit(y): An Augmented Historical Experience
Period of study: 2017-2018
Lead academic: Dr Vanessa Camilleri
Student/researchers: Anne-Marie Camilleri (BSc), Lara Caruana Montaldo (BSc), Lizzy Farrugia (BSc)
Brief description of the study: The scope of this project is to investigate the use of location-based simulation in Augmented Reality on the user experience of historical events. In this project we explore the design and techniques involved in re-creating the entry of the SS Ohio cargo ship into the Maltese Grand Harbour during WWII. This was a historic event for Malta: the arrival of this ship, the last of a convoy most of which was sunk on its way here, marked hope for the Maltese. Users visiting the Upper Barrakka Gardens in Valletta, overlooking the Grand Harbour, can use this mobile app to view the ship as it makes its entry into port and experience the atmosphere as recorded on the day, from that same location. The app also contains additional historical information.
VR & Autism
Title of study: Walking in Small Shoes
Period of study: 2016-2018
Lead academics: Dr Vanessa Camilleri, Prof. Vince Briffa, Prof. Alexiei Dingli, Prof. Matthew Montebello
Student/researchers: Ms Steffi De Martino (M.A.), Mr Joseph Camilleri (M.A.), Mr Foaad Haddod (M.Sc.)
Brief description of the study: This is a joint project between the Department of Artificial Intelligence and the Department of Digital Arts of the University of Malta. In this study we investigate the role of Virtual Reality (VR) in the development of empathy through continuing professional development (CPD). A mobile VR application is designed and developed for primary-school teachers to help them empathise with some of the hurdles experienced by children having traits of autism. With conditions such as autism, fully empathising with an individual without having gone through the same experiences is not easy, and this becomes more difficult within a classroom context. This mobile app, featuring a realistic 360-degree interactive VR experience, launches adult teachers into the world of young children with autism as they struggle with daily classroom activities.
VR & Multiculturalism
Title of study: Understanding Migrants’ Lives in VR
Period of study: 2016-2018
Lead academics: Dr Vanessa Camilleri, Prof. Vince Briffa, Prof. Alexiei Dingli, Prof. Matthew Montebello
Student/researchers: Mr Joseph Camilleri (MA), Mr David Scicluna (MSc)
Brief description: This is a joint project between the Department of Artificial Intelligence and the Department of Digital Arts of the University of Malta. In this study we have designed and developed a mobile VR app to foster a greater understanding of third-country national migrants coming to Malta, at times illegally via boats crossing the Mediterranean Sea. The realistic 360-degree film created as part of the unfolding narratives is presented to the viewer as an immersive film in which the user can feel as though he or she is taking part in the story. The scope is to foster greater empathy towards people who risk their lives in search of a better one in our country.
VR & Orchestra
Title of study: Understanding the impact of VR on Classical Music Appreciation
Period of study: 2016-2017
Lead academics: Dr Vanessa Camilleri, Prof. Alexiei Dingli
Student/researcher: Mr Joseph Camilleri (MA)
Brief description: This is a joint project between the Department of Artificial Intelligence and the Department of Digital Arts of the University of Malta, together with the Malta Philharmonic Orchestra. In this study we have designed and developed a mobile VR app with the scope of understanding the impact of VR on classical music appreciation. Using VR to bring music or art closer to a diverse audience is not a new endeavour, but it is one that is gathering popularity. Gaining deeper insight into users' perceptions as they use immersive technologies would help the design of HCI models for applying AI to the creative fields.
Large-Scale Actionable Interesting Pattern Mining
Brief description: As data continues to grow exponentially, the need for technologies and solutions to effectively handle and understand such data is continuously increasing. This project will therefore introduce a framework capable of processing multiple data sources at large scale to extract actionable and interesting patterns. Such patterns not only help us understand the underlying structure of the data, but are also capable of providing intelligent predictions through smart analytics. Through the extraction of these actionable and interesting patterns, one can understand how the collected data correlates with some target objective, and how manipulating such features can influence our goal. Hence, with this framework, one would not only be able to learn about and understand the present, but would also be equipped with the knowledge needed to shape the future through automation.
Researchers/students: Dr Charlie Abela and Dr Lalit Garg / David Farrugia
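A classic building block behind frequent and interesting pattern extraction is Apriori-style itemset mining. The sketch below is illustrative only, not the project's framework: it mines frequent itemsets from a toy transaction list using the standard level-wise approach.

```python
from itertools import combinations

# Illustrative Apriori-style sketch (not the project's framework):
# level-wise frequent-itemset mining over a small transaction set.

def frequent_itemsets(transactions, min_support):
    """Return {itemset: support_count} for all itemsets that appear in
    at least `min_support` transactions."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items}   # candidate 1-itemsets
    result = {}
    k = 1
    while current:
        # Count support for each candidate k-itemset.
        counts = {c: sum(c <= t for t in transactions) for c in current}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        # Candidate (k+1)-itemsets: unions of pairs of frequent
        # k-itemsets (the Apriori property prunes everything else).
        current = {a | b for a, b in combinations(frequent, 2)
                   if len(a | b) == k + 1}
        k += 1
    return result
```

Real large-scale frameworks distribute the counting step and add interestingness measures on top (lift, confidence, and similar), but the pruning idea is the same.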
Identification of Alien Objects Underwater
Brief description: From the earliest sailors centuries ago to the present day, the human fascination with the deep blue seas has never waned. The technology available today is far different from what our ancestors had, yet understanding the oceans still presents challenges. Underwater object detection is one such area fraught with difficulty. Underwater environments vary greatly from one another, each holding its own inherent qualities and features. Neural networks and deep learning approaches have proven their capabilities on in-air imagery, and this has sparked interest in training the same models and approaches on underwater images. However, collecting a large enough dataset is a tedious task that is often deemed infeasible.
Furthermore, attempting to train a model on a small sample size will lead to over-fitting. Overcoming these challenges would prove useful to a variety of fields, from the environmental (ocean clean-ups) and the economic (pipeline inspections) to the historical (underwater archaeology), among others. To overcome the problem of over-fitting, the approach taken in this project was transfer learning, on the argument that Convolutional Neural Networks are not only classifiers but also feature extractors. Hence, a CNN trained on a large dataset of in-air images will be sufficient to classify objects in underwater scenes after some fine-tuning on images taken underwater, since the pre-trained model will already be sensitive to information such as colours, textures and edges. Mask R-CNN was the model chosen for this project, achieving a Mean Average Precision of 0.509.
Student/researchers: Ms Stephanie Chetcuti / Prof. Matthew Montebello
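The transfer-learning argument above can be illustrated in miniature. This is a conceptual sketch, not the project's Mask R-CNN pipeline: a fixed "pretrained" feature extractor stands in for the frozen CNN backbone, and only a small classification head is retrained on scarce new-domain data. All data and mappings are synthetic.

```python
# Conceptual transfer-learning sketch (synthetic stand-in, not the
# project's Mask R-CNN pipeline): freeze the feature extractor, retrain
# only a small head on the new domain's few labelled samples.

def pretrained_features(x):
    # Stand-in for a frozen CNN backbone: a fixed nonlinear mapping
    # from raw input to a feature vector.
    return [x[0] + x[1], x[0] - x[1], x[0] * x[1]]

def train_head(samples, labels, epochs=50, lr=0.1):
    """Train only the head (a perceptron) on the extracted features;
    the backbone above is never modified."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = pretrained_features(x)
            pred = 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
            b += lr * err
    return w, b

def predict(w, b, x):
    f = pretrained_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
```

In the actual project the same division of labour applies at much larger scale: the pre-trained Mask R-CNN backbone supplies colour, texture and edge sensitivity, and fine-tuning adjusts the detection heads to underwater imagery.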
Drone-based Search
Brief description: Over the past few years, Unmanned Aerial Vehicles (UAVs) such as drones have seen great advances, both in the miniaturisation of hardware and in ever-increasing computational power. Recent years have also seen a rise in confidence in the use of robotics and artificial intelligence in emergency situations such as hospital operations and life-risking procedures. All this, together with the daily acquisition of aerial imagery, encourages the field of computer vision to take on the challenge of processing UAV live video feeds in real time.
This project evaluates efficient approaches for drone-based search, focusing mostly on a search-and-rescue setting in which the object of the search is a person. It starts with the creation of a custom object-detection model and continues with tests comparing it against other state-of-the-art object-detection models that excel in an attribute important to real-time detection, such as accuracy or processing speed. The drone in question is a Tello EDU which, although it has a short battery life of around 13 minutes, can be programmed in Python, a necessity in most areas of computer vision. This setup provides a real-time video stream and communicates it directly to a receiving system, which processes it and displays it on screen. The evaluation comprises field tests over a set environment, testing real-time image processing by recording the average frames per second, together with a general evaluation of detection accuracy. The project also shows how a modular design and implementation can result in code that is easy to adapt, opening the possibility for branching projects with just a few adjustments, such as an indoor search drone able to look for personal belongings while hovering around the rooms of a home.
Student/researchers: Mr Daniel Mallia / Prof. Matthew Montebello
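The modular design mentioned above can be sketched as a pipeline of swappable stages. All names here are hypothetical, and the frame source and detector are stubs standing in for the Tello video stream and the trained detection model; the point is that replacing the detector (say, a person detector with a household-object detector) leaves the rest of the pipeline untouched.

```python
# Sketch of a modular search pipeline (hypothetical names, stub
# components): source -> detector -> result collection. Each stage can
# be swapped independently of the others.

class FrameSource:
    """Stand-in for the Tello video stream: yields frames in order."""
    def __init__(self, frames):
        self.frames = frames

    def stream(self):
        yield from self.frames

class StubDetector:
    """Stand-in for a trained object-detection model. Here a 'frame' is
    simply a list of labels visible in it."""
    def __init__(self, target):
        self.target = target

    def detect(self, frame):
        return [obj for obj in frame if obj == self.target]

def run_pipeline(source, detector):
    """Feed each frame through the detector, recording (frame, label)
    pairs for every detection."""
    hits = []
    for i, frame in enumerate(source.stream()):
        for detection in detector.detect(frame):
            hits.append((i, detection))
    return hits
```

Swapping `StubDetector("person")` for `StubDetector("keys")` is the indoor-search variant in miniature: one component changes, `run_pipeline` does not.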
Speech Recognition
Title of study: MASRI - Maltese Speech Recognition
Period of study: 2019-2020
Main academics: Dr Claudia Borg, Dr Albert Gatt & Dr Andrea Demarco
Brief description: This project will deliver the first automatic speech recognition system for the Maltese language. The scientific contributions of this project are twofold: (a) harnessing state-of-the-art machine learning methods for low-resource languages, of which Maltese is an example; (b) dealing computationally with code-switching, an important feature of bilingual settings such as Malta, where real-time speech routinely involves switching between English and Maltese. Hence, we aim to develop new standards for complex linguistic settings, applicable to other scenarios. From a practical perspective, this project will deliver a reusable platform for speech recognition for Maltese with implemented solutions for handling realistic, noisy dialogue.
Deep Learning & Traffic Optimisation
Title of Study: Using Deep Learning to Optimise Road Network Traffic Management
Period of study: 2019-2021
Lead academic: Prof. Alexiei Dingli
Student/researcher: Mr Keith Mintoff (MSc)
Brief description: Traffic is a problem that most commuters have to deal with on a daily basis. The cost of traffic congestion in the EU is estimated at 1% of the EU's GDP, and congestion is one of the leading causes of pollution in cities. Furthermore, building additional road infrastructure is costly and not always possible due to space constraints. Therefore, in this research project we explore the use of artificial intelligence techniques to make more efficient use of the existing infrastructure. The aim of this research is to devise an AI model that can discover optimal policies for existing traffic control mechanisms, minimising traffic congestion over a given time frame based on road-level traffic statistics such as the average speed of vehicles.
Various traffic controls are considered for optimisation, the primary ones being the traffic lights and the speed limits of the roads on the simulated network. A number of deep learning models are explored as a means of computing the 'optimal' policy set for a given road network based on the traffic statistics. Prior work has shown deep learning techniques to be very effective in discovering optimal policies. To test the effectiveness of the models, they are deployed on a road-network simulation built in the form of a game. The models take as input the average traffic statistics per road and alter the 'settings' of each traffic control mechanism, with the ultimate goal of minimising traffic congestion in the long run.
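The policy-learning idea can be illustrated with a toy stand-in: tabular Q-learning on a single simulated intersection. The project itself explores deep models on full road networks; the dynamics, states and parameters below are invented for illustration only.

```python
import random

# Toy stand-in for the approach above: tabular Q-learning learns which
# direction to give green to at one simulated intersection. All
# dynamics and parameters are invented for illustration.

def step(queues, action):
    """One simulation tick: one car arrives per direction, then the
    green direction discharges up to three cars."""
    ns, ew = queues[0] + 1, queues[1] + 1
    if action == 0:
        ns = max(ns - 3, 0)             # green for north-south
    else:
        ew = max(ew - 3, 0)             # green for east-west
    reward = -(ns + ew)                 # congestion penalty
    return (ns, ew), reward

def state_of(queues):
    # Coarse state: which direction is more congested.
    return 0 if queues[0] >= queues[1] else 1

def train(episodes=200, horizon=50, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]        # q[state][action]
    for _ in range(episodes):
        queues = (random.randint(0, 5), random.randint(0, 5))
        for _ in range(horizon):
            s = state_of(queues)
            if random.random() < eps:   # epsilon-greedy exploration
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            queues, r = step(queues, a)
            s2 = state_of(queues)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    return q
```

The learned table should prefer serving the more congested direction, which is the sensible policy in this toy model; the deep models in the project play the analogous role over the far larger state space of a whole road network.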