Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/92144
Title: Autonomous drone delivery
Authors: Diacono, Sean (2021)
Keywords: Drone aircraft
Delivery of goods
Deep learning (Machine learning)
Neural networks (Computer science)
Issue Date: 2021
Citation: Diacono, S. (2021). Autonomous drone delivery (Bachelor’s dissertation).
Abstract: The recent increase in e-commerce and demand for quick delivery has encouraged the research and development of aerial delivery methods using quadcopter drones. This increased demand for fast delivery coincides with the recent boom in Artificial Intelligence and drones. Neural Networks and Deep Learning techniques have improved in both effectiveness and accessibility, making them ideal candidates for use in autonomous drone delivery systems. For aerial drone delivery to be feasible, these systems must guarantee the safety of people, the environment, and the drone itself. This implies that they require obstacle avoidance methods and autonomous landing processes. This dissertation proposes an autonomous drone delivery system, NavAI, which makes use of Neural Networks and Deep Learning to implement obstacle avoidance and autonomous landing. NavAI is implemented within Microsoft’s AirSim drone simulator, which also handles navigation and low-level control of the drone. The obstacle avoidance system uses a depth estimation model, MonoDepth2, to generate depth maps from single colour images captured by the drone’s forward-facing camera. Each depth map is then used to check whether an obstacle is in the way of the drone, and if so, commands are sent to the drone to take evasive action. When evaluated, this system completed 94.5% of all trips taken at a velocity of 2 m/s. The autonomous drone landing method makes use of semantic image segmentation, a process in which a model trained on a labelled dataset splits an image into different segments to create a segmentation map. We trained two semantic segmentation models: one using the U-Net architecture and the other using the DeepLabv3+ architecture. The models were trained on the Semantic Drone Dataset from the Institute of Computer Graphics and Vision at Graz University of Technology.
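The obstacle check described above (a depth map from the forward-facing camera, inspected for anything too close in the flight path) can be sketched roughly as follows. This is an illustrative reconstruction, not the dissertation's actual code: the function name, region-of-interest size, and 3 m safety threshold are assumptions, and the depth map is taken as a NumPy array of per-pixel depths in metres such as MonoDepth2 can produce from a single colour frame.

```python
import numpy as np

def obstacle_ahead(depth_map: np.ndarray,
                   min_safe_depth: float = 3.0,
                   roi_fraction: float = 0.4) -> bool:
    """Flag an obstacle in the drone's flight path.

    depth_map: 2-D array of per-pixel depth in metres, e.g. estimated
    by a monocular model like MonoDepth2 from the forward camera.
    min_safe_depth and roi_fraction are illustrative values, not the
    parameters used in NavAI.
    """
    h, w = depth_map.shape
    # Inspect only a central region of interest, roughly where the
    # drone will fly, rather than the whole frame.
    dh, dw = int(h * roi_fraction / 2), int(w * roi_fraction / 2)
    roi = depth_map[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]
    # An obstacle is flagged when the nearest point in the ROI is
    # closer than the safety threshold; the controller would then
    # issue an evasive-action command.
    return float(roi.min()) < min_safe_depth
```

In a full system this check would run per frame, with a positive result triggering the evasive manoeuvre commands the abstract mentions.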
The trained models are then used to generate segmentation maps from images captured by the drone’s bottom-facing camera; each segmentation map is used to check whether the surface beneath the drone is safe for landing. We evaluated the performance of this system and found that the DeepLabv3+ and U-Net models landed the drone safely 75% and 73% of the time, respectively.
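The landing decision described above (a segmentation map from the bottom-facing camera, checked for a safe surface) can be sketched like this. Again a hypothetical reconstruction: the class IDs, patch size, and 95% threshold are placeholders, since the Semantic Drone Dataset defines its own label set and the dissertation's actual criteria are not given here.

```python
import numpy as np

# Hypothetical class IDs for landable surfaces (e.g. paved area, grass);
# the real Semantic Drone Dataset label IDs differ.
SAFE_CLASSES = {0, 1}

def landing_zone_safe(seg_map: np.ndarray,
                      safe_classes=SAFE_CLASSES,
                      required_fraction: float = 0.95) -> bool:
    """Decide whether the surface under the drone is safe to land on.

    seg_map: 2-D array of per-pixel class IDs produced by a
    segmentation model (e.g. U-Net or DeepLabv3+) from the
    bottom-facing camera image.
    """
    h, w = seg_map.shape
    # Check a central patch, roughly the area directly beneath the drone.
    patch = seg_map[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    # Require nearly all patch pixels to belong to a landable class.
    safe = np.isin(patch, list(safe_classes))
    return float(safe.mean()) >= required_fraction
```

A descent controller would call this check repeatedly while lowering the drone, aborting the landing if the surface ever fails the test.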
Description: B.Sc. IT (Hons)(Melit.)
URI: https://www.um.edu.mt/library/oar/handle/123456789/92144
Appears in Collections:Dissertations - FacICT - 2021
Dissertations - FacICTAI - 2021

Files in This Item:
21BITAI021.pdf — 3.88 MB, Adobe PDF (Restricted Access; view/open or request a copy)


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.