Please use this identifier to cite or link to this item:
https://www.um.edu.mt/library/oar/handle/123456789/16683
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Micallef, Brian W. | - |
dc.contributor.author | Debono, Carl James | - |
dc.contributor.author | Farrugia, Reuben A. | - |
dc.date.accessioned | 2017-02-21T18:15:58Z | - |
dc.date.available | 2017-02-21T18:15:58Z | - |
dc.date.issued | 2013 | - |
dc.identifier.citation | Micallef, B. W., Debono, C. J., & Farrugia, R. A. (2013). Low complexity disparity estimation for immersive 3D video transmission. IEEE International Conference on Communications Workshops (ICC), Budapest. 612-616. | en_GB |
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/16683 | - |
dc.description | This research work was partially funded by the Strategic Educational Pathways Scholarship Scheme (STEPS-Malta) and by European Union - European Social Fund (ESF 1.25). | en_GB |
dc.description.abstract | Bandwidth-limited channels demand the transmission of per-pixel depth maps together with the texture data to provide immersive 3D video services that allow arbitrary 3D viewpoint reconstruction. This auxiliary depth data offers geometric information which, together with the multi-view and epipolar geometries, can be exploited during 3D video coding to calculate geometric positions for the search areas of disparity estimation. These positions give a more accurate estimate of the match from which to compensate the current macro-block than the median predictor adopted by the H.264/MVC standard. The result is smaller search areas that reduce the encoder's computational requirements. In this work, we exploit this fact, together with the largest depth variation within the macro-block to be encoded, to calculate and adaptively adjust these areas along the epipolar lines. The proposed solution achieves a speedup gain of up to 32 times over the original disparity estimation, with negligible influence on the rate-distortion performance of 3D video coding. This greatly reduces the computational cost of the H.264/MVC encoder and eases the need to implement it on expensive systems that are otherwise necessary to meet the stringent latency requirements of broadcast transmissions, while still providing the coding efficiency required for such scenarios. | en_GB |
dc.language.iso | en | en_GB |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_GB |
dc.rights | info:eu-repo/semantics/openAccess | en_GB |
dc.subject | Three-dimensional display systems | en_GB |
dc.subject | Image stabilization | en_GB |
dc.subject | 3-D video (Three-dimensional imaging) | en_GB |
dc.subject | Data transmission systems | en_GB |
dc.title | Low complexity disparity estimation for immersive 3D video transmission | en_GB |
dc.type | conferenceObject | en_GB |
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB |
dc.bibliographicCitation.conferencename | IEEE International Conference on Communications Workshops (ICC) | en_GB |
dc.bibliographicCitation.conferenceplace | Budapest, Hungary, 9-13/06/2013 | en_GB |
dc.description.reviewed | peer-reviewed | en_GB |
dc.identifier.doi | 10.1109/ICCW.2013.6649306 | - |
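The abstract above describes shrinking the disparity-estimation search area by predicting the match position from the depth map and epipolar geometry, then sizing the window by the largest depth variation inside the macro-block. A minimal sketch of that idea, assuming a rectified stereo setup where disparity follows d = f·B/Z; the function names and parameter values are hypothetical illustrations, not the paper's code:

```python
# Illustrative sketch (not the paper's implementation): depth-guided
# reduction of the disparity search range for a rectified stereo pair,
# where disparity d = f * B / Z (f: focal length in pixels,
# B: camera baseline, Z: depth). All values below are hypothetical.

def predicted_disparity(depth, focal_px, baseline):
    """Geometric disparity prediction for a single depth value."""
    return focal_px * baseline / depth

def search_window(depth_block, focal_px, baseline, margin=2):
    """Center a reduced search range on the geometric prediction,
    widened by the largest depth variation in the macro-block
    (far depth -> small disparity, near depth -> large disparity)."""
    d_min = predicted_disparity(max(depth_block), focal_px, baseline)
    d_max = predicted_disparity(min(depth_block), focal_px, baseline)
    return (int(d_min) - margin, int(d_max) + margin)

# Example: a macro-block whose depth spans 4.0 to 5.0 units yields a
# narrow window instead of an exhaustive full-range search.
lo, hi = search_window([5.0, 4.6, 4.2, 4.0], focal_px=800, baseline=0.1)
```

For rectified views the epipolar lines are horizontal, so the candidate matches collapse to this short 1-D range rather than a full 2-D search area, which is the source of the reduced encoder workload the abstract reports.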
Appears in Collections: | Scholarly Works - FacICTCCE |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
OA Conference paper - Low complexity disparity estimation for immersive 3D video transmission.2-6.pdf | Low complexity disparity estimation for immersive 3D video transmission | 470.66 kB | Adobe PDF |
Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.