Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/16846
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Silva Cruz, Luis A. da
dc.contributor.author: Cordina, Mario
dc.contributor.author: Debono, Carl James
dc.contributor.author: Amado Assuncao, Pedro A.
dc.date.accessioned: 2017-02-26T14:39:45Z
dc.date.available: 2017-02-26T14:39:45Z
dc.date.issued: 2016-12
dc.identifier.citation: da Silva Cruz, L. A., Cordina, M., Debono, C. J., & Amado Assuncao, P. A. (2016). Quality monitor for 3-D video over hybrid broadcast networks. IEEE Transactions on Broadcasting, 62(4), 785-799. [en_GB]
dc.identifier.uri: https://www.um.edu.mt/library/oar/handle/123456789/16846
dc.description.abstract: Hybrid broadcast networks are envisaged to merge broadcast TV with broadband Internet, acting as a key enabler for new and better video services in the near future. This is expected to contribute to the evolution of 3-D and multiview video services, given the inherent diversity of their coded data, which comprises several complementary streams. Using the multiview video-plus-depth format, at least two independent streams may be delivered through different channels over a hybrid network, that is, broadcasting backward-compatible 2-D video in one channel and delivering the corresponding depth stream through complementary channels such as LTE-based broadband Internet access. This article addresses the problem of monitoring the quality of 3-D video (color plus depth) delivered in such hybrid networking environments, proposing a novel scheme to estimate the visual quality degradation resulting from packet losses on the broadband Internet channel carrying only the depth stream, without relying on the texture component of the video or any other reference data. A novel no-reference (NR) approach is described, operating as a cascade of two estimators and using only header information from the packets carrying the depth stream over broadband IP. The two-stage cascaded estimator comprises an NR packet-layer model based on an artificial neural network followed by a logistic model, with each stage outputting a separate quality estimate. Performance evaluations, done by comparing the actual and estimated scores for the structural similarity index and the subjective differential mean-opinion score, reveal high accuracy for both estimates, with Pearson linear correlation coefficient values greater than 0.89. Since only packet-layer information is used, the algorithmic complexity of this monitoring tool is low, making it suitable for standalone implementation at arbitrary network nodes. [en_GB]
dc.language.iso: en [en_GB]
dc.publisher: Institute of Electrical and Electronics Engineers Inc. [en_GB]
dc.rights: info:eu-repo/semantics/restrictedAccess [en_GB]
dc.subject: Three-dimensional display systems [en_GB]
dc.subject: Streaming video [en_GB]
dc.subject: Multimedia communications [en_GB]
dc.subject: Broadcast data systems [en_GB]
dc.subject: Neural networks (Computer science) [en_GB]
dc.subject: Quality of service (Computer networks) [en_GB]
dc.title: Quality monitor for 3-D video over hybrid broadcast networks [en_GB]
dc.type: article [en_GB]
dc.rights.holder: The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. [en_GB]
dc.description.reviewed: peer-reviewed [en_GB]
dc.identifier.doi: 10.1109/TBC.2016.2617278
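
The abstract above describes a two-stage cascaded no-reference (NR) estimator: an artificial neural network operating on packet-layer features of the depth stream produces an SSIM estimate, and a logistic model then maps that estimate to a DMOS-style subjective score. The following is a minimal illustrative sketch of such a cascade in Python; the feature set, network architecture, and all parameter values are assumptions for illustration only, since this record does not include the paper's actual model or fitted coefficients.

```python
import numpy as np

def ann_ssim_estimate(features, W1, b1, W2, b2):
    # Stage 1: small feedforward ANN mapping packet-layer features of the
    # depth stream (hypothetical examples: packet loss rate, mean loss burst
    # length, normalized bitrate) to an SSIM estimate.
    h = np.tanh(W1 @ features + b1)   # one hidden layer with tanh activation
    return (W2 @ h + b2).item()       # scalar SSIM estimate

def logistic_dmos_estimate(ssim_hat, a=4.5, b=12.0, c=0.75):
    # Stage 2: generic logistic mapping from the stage-1 SSIM estimate to a
    # DMOS-style score; a, b, c are illustrative values, not fitted ones.
    return a / (1.0 + np.exp(-b * (ssim_hat - c)))

# Untrained random weights, only to make the sketch runnable:
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)   # 3 features -> 5 hidden units
W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)   # 5 hidden units -> 1 output

features = np.array([0.02, 1.5, 0.8])  # hypothetical packet-layer features
ssim_hat = ann_ssim_estimate(features, W1, b1, W2, b2)
print(ssim_hat, logistic_dmos_estimate(ssim_hat))
```

In practice each stage would be fitted against measured SSIM values and subjective DMOS scores; because only packet headers are inspected, such an estimator can run at arbitrary network nodes without access to the decoded video.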
Appears in Collections: Scholarly Works - FacICTCCE

Files in This Item:
File: Quality Monitor for 3-D Video Over Hybrid Broadcast Networks.pdf (Restricted Access)
Description: Quality monitor for 3-D video over hybrid broadcast networks
Size: 2.12 MB
Format: Adobe PDF

