Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/78293
Title: | An objective no-reference video quality assessment metric |
Authors: | Galea, Christian |
Keywords: | Image processing; Imaging systems -- Image quality; Digital video; Algorithms |
Issue Date: | 2014 |
Citation: | Galea, C. (2014). An objective no-reference video quality assessment metric (Master's dissertation). |
Abstract: | The popularity of video content is increasing rapidly, and videos have thus become part of people's daily lives. Consequently, this has increased the onus on service providers to ensure that the quality of the content perceived by end users is acceptable. Although subjective evaluation is the most accurate form of quality assessment, it is typically cumbersome, time-consuming and expensive. As a result, objective metrics able to capture the quality as perceived by the Human Visual System (HVS) have been proposed in the literature. Most objective Video Quality Assessment (VQA) algorithms currently implemented are either Full-Reference (FR) or Reduced-Reference (RR), referring to the amount of content used from the original video. Unfortunately, in most practical applications no information at all from the reference video may be used. Hence, this project focused on the design of a No-Reference (NR) metric which evaluates quality using only the video under test. The proposed approach can be sub-divided into a metric which computes spatial quality and a metric that evaluates temporal degradations. Both algorithms exploit the fact that the statistics of natural scenes are regular in pristine content but are modified in the presence of distortion. The spatial metric is itself an NR Image Quality Assessment (IQA) algorithm based mostly on features extracted by existing NR-IQA metrics described in the literature, chosen so as to provide complementary information. These features are combined with three novel features which capture quantisation noise, using Support Vector Regression (SVR), to produce objective scores which correlate highly with subjective data when evaluated on two of the largest and best-known image databases. For VQA, the spatial metric is computed on every frame, and two scores are derived to represent the quality over the entire video duration.
These are combined with 21 temporal features, obtained by fitting a Generalised Gaussian Distribution (GGD) to frame differences, which capture deviations from the regular statistical properties of pristine videos caused by distortion. All features are combined using SVR to yield an NR-VQA metric. It is demonstrated that, excluding MPEG-2-coded videos, which are nowadays declining in importance, the proposed metric is statistically indistinguishable from even state-of-the-art FR and RR metrics proposed in the literature. |
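For readers unfamiliar with the temporal-feature idea in the abstract, a minimal sketch of one building block follows: estimating the GGD shape parameter from a set of frame-difference samples by moment matching (inverting the theoretical ratio E[x²]/E[|x|]² by bisection). This is an illustrative sketch only, not the dissertation's actual implementation; the synthetic Gaussian "frame differences" stand in for real video data, and all function names are invented for this example.

```python
import math
import random

def ggd_ratio(beta):
    # Theoretical ratio E[x^2] / E[|x|]^2 for a zero-mean GGD with
    # shape parameter beta: Gamma(1/b) * Gamma(3/b) / Gamma(2/b)^2.
    # This ratio decreases monotonically as beta increases
    # (beta=1, Laplacian -> 2;  beta=2, Gaussian -> pi/2).
    return math.gamma(1 / beta) * math.gamma(3 / beta) / math.gamma(2 / beta) ** 2

def fit_ggd_shape(samples):
    """Moment-matching estimate of the GGD shape parameter.

    Computes the sample ratio E[x^2] / E[|x|]^2 and inverts the
    theoretical ratio by bisection on beta in (0.1, 10).
    """
    n = len(samples)
    mean_abs = sum(abs(x) for x in samples) / n
    mean_sq = sum(x * x for x in samples) / n
    target = mean_sq / (mean_abs ** 2)
    lo, hi = 0.1, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if ggd_ratio(mid) > target:
            lo = mid   # ratio too large -> beta too small
        else:
            hi = mid
    return (lo + hi) / 2

# Synthetic "frame differences": Gaussian noise, so the fitted
# shape parameter should come out close to beta = 2.
random.seed(0)
diffs = [random.gauss(0.0, 1.0) for _ in range(20000)]
beta = fit_ggd_shape(diffs)
```

In a real NR-VQA pipeline, such fitted shape (and scale) parameters per frame difference would form part of the temporal feature vector fed to the regressor; distorted videos tend to push these parameters away from their natural-scene values.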
Description: | M.SC.ICT COMMS&COMPUTER ENG. |
URI: | https://www.um.edu.mt/library/oar/handle/123456789/78293 |
Appears in Collections: | Dissertations - FacICT - 2014; Dissertations - FacICTCCE - 2014 |
Files in This Item:
File | Description | Size | Format
---|---|---|---
M.SC.ICT_Galea_Christian_2014.pdf (Restricted Access) | | 18.01 MB | Adobe PDF
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.