Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/92216
Title: | Stairway detection from aerial imagery |
Authors: | Grech Fleri Soler, Joy (2021) |
Keywords: | Data sets -- Malta; Computer vision -- Malta; Machine learning; Deep learning (Machine learning); Pattern recognition systems |
Issue Date: | 2021 |
Citation: | Grech Fleri Soler, J. (2021). Stairway detection from aerial imagery (Bachelor’s dissertation). |
Abstract: | Object detection is an active area of research in computer vision. Object detection from aerial imagery adds further complexity owing to image resolution, clarity and the small apparent size of objects. The specific task of detecting stairways from aerial imagery has not previously been addressed, and it poses a unique combination of challenges: those inherent to aerial imagery, compounded by fluctuating zoom levels and the inclusion of shaded imagery. This study has two components: dataset compilation and experiments. The dataset comprises over 10,000 images of stairways of contrasting sizes from around the Maltese archipelago, and the experiments were specifically designed to test its integrity. The first experiment compared the mean Average Precision (mAP) of one machine-learning and two deep-learning models. The Haar detector reached a mAP of 60%, whereas YOLOv4 and Detectron2, the two deep-learning models used, scored mAP values of 65.6% and 72.6% respectively. To further evaluate the integrity and robustness of the dataset, a zoom-level experiment was conducted in which YOLOv4 was trained on two configurations of zoom levels. In the first configuration, the model was trained on a combination of two zoom levels and tested on all three zoom levels chosen for this study; in the second, it was trained and tested on all three. The first configuration yielded a mAP of 24.48% on "low"-altitude zoom levels and 49.19% on "high"-altitude zoom levels, while the second yielded a mAP of 46.25%. Shade and shadows pose a significant challenge across computer vision tasks, so a further experiment examined dataset modifications pertaining to shaded imagery. Using YOLOv4 with shaded imagery in the testing set only resulted in a mAP of 74.16% but imprecise predictions on shaded images, whereas including shaded imagery throughout all partitions of the dataset yielded a mAP of 49.09%. |
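The mean Average Precision figures quoted in the abstract can be illustrated with a minimal single-class sketch of the metric: detections are ranked by confidence, matched greedily to ground-truth boxes at an IoU threshold of 0.5, and precision is averaged at each recall step. All boxes and scores below are illustrative examples, not data from the dissertation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, thr=0.5):
    """Single-class AP. preds: list of (score, box); gts: list of boxes."""
    preds = sorted(preds, key=lambda p: -p[0])   # rank by descending confidence
    matched = set()                              # ground truths already claimed
    hits = []                                    # 1 = true positive, 0 = false positive
    for score, box in preds:
        best, best_i = 0.0, -1
        for i, g in enumerate(gts):
            if i not in matched and iou(box, g) > best:
                best, best_i = iou(box, g), i
        if best >= thr:
            matched.add(best_i)
            hits.append(1)
        else:
            hits.append(0)
    # Sum precision at each rank where recall increases, divide by #ground truths
    ap, tp = 0.0, 0
    for rank, is_tp in enumerate(hits, start=1):
        if is_tp:
            tp += 1
            ap += tp / rank
    return ap / len(gts) if gts else 0.0

# Two stairways, one found accurately and one spurious detection:
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(0.9, (1, 1, 10, 10)), (0.8, (50, 50, 60, 60))]
print(average_precision(preds, gts))  # 0.5: half the stairways recovered
```

mAP as reported for multi-class detectors such as YOLOv4 is simply this AP averaged over classes; with stairways as the only class, mAP and AP coincide.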
Description: | B.Sc. IT (Hons)(Melit.) |
URI: | https://www.um.edu.mt/library/oar/handle/123456789/92216 |
Appears in Collections: | Dissertations - FacICT - 2021; Dissertations - FacICTAI - 2021 |
Files in This Item:
File | Description | Size | Format
---|---|---|---
21BITAI027.pdf (Restricted Access) | | 88.78 MB | Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.