Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/85808
Full metadata record
DC Field | Value | Language
dc.contributor.author | Aquilina, Matthew | -
dc.contributor.author | Galea, Christian | -
dc.contributor.author | Abela, John | -
dc.contributor.author | Camilleri, Kenneth P. | -
dc.contributor.author | Farrugia, Reuben A. | -
dc.date.accessioned | 2021-12-20T10:47:40Z | -
dc.date.available | 2021-12-20T10:47:40Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Aquilina, M., Galea, C., Abela, J., Camilleri, K. P., & Farrugia, R. A. (2021). Improving super-resolution performance using meta-attention layers. IEEE Signal Processing Letters, 28, 2082-2086. | en_GB
dc.identifier.uri | https://www.um.edu.mt/library/oar/handle/123456789/85808 | -
dc.description.abstract | Convolutional Neural Networks (CNNs) have achieved impressive results across many super-resolution (SR) and image restoration tasks. While many such networks can upscale low-resolution (LR) images using just the raw pixel-level information, the ill-posed nature of SR can make it difficult to accurately super-resolve an image which has undergone multiple different degradations. Additional information (metadata) describing the degradation process (such as the blur kernel applied, compression level, etc.) can guide networks to super-resolve LR images with higher fidelity to the original source. Previous attempts at informing SR networks with degradation parameters have indeed been able to improve performance in a number of scenarios. However, due to the fully-convolutional nature of many SR networks, most of these metadata fusion methods either require a complete architectural change or necessitate the addition of significant extra complexity. Thus, these approaches are difficult to introduce into arbitrary SR networks without considerable design alterations. In this letter, we introduce meta-attention, a simple mechanism which allows any SR CNN to exploit the information available in relevant degradation parameters. The mechanism functions by translating the metadata into a channel attention vector, which in turn selectively modulates the network's feature maps. Incorporating meta-attention into SR networks is straightforward, as it requires no specific type of architecture to function correctly. Extensive testing has shown that meta-attention can consistently improve the pixel-level accuracy of state-of-the-art (SOTA) networks when provided with relevant degradation metadata. Despite average memory/runtime overheads of less than 2.6%/0.025 seconds for the datasets and models considered, meta-attention improves performance for both PSNR and SSIM; for PSNR, the gain on blurred/downsampled (×4) images is 0.2969 dB (on average) and 0.3320 dB for SOTA general and face SR models, respectively. The coding framework used for this letter is available at: https://github.com/um-dsrg/Super-Resolution-Meta-Attention-Networks (an illustrative sketch of the mechanism follows this record). | en_GB
dc.language.iso | en | en_GB
dc.publisher | IEEE | en_GB
dc.rights | info:eu-repo/semantics/restrictedAccess | en_GB
dc.subject | Image processing | en_GB
dc.subject | Optical data processing | en_GB
dc.subject | Computer graphics | en_GB
dc.subject | Pattern recognition | en_GB
dc.subject | Image reconstruction | en_GB
dc.title | Improving super-resolution performance using meta-attention layers | en_GB
dc.type | article | en_GB
dc.rights.holder | The copyright of this work belongs to the author(s)/publisher. The rights of this work are as defined by the appropriate Copyright Legislation or as modified by any successive legislation. Users may access this work and can make use of the information contained in accordance with the Copyright Legislation provided that the author must be properly acknowledged. Further distribution or reproduction in any format is prohibited without the prior permission of the copyright holder. | en_GB
dc.description.reviewed | peer-reviewed | en_GB
dc.identifier.doi | 10.1109/LSP.2021.3116518 | -
dc.publication.title | Signal Processing Letters | en_GB
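
The abstract above describes meta-attention as translating degradation metadata into a channel attention vector that selectively modulates a network's feature maps. The following is a minimal PyTorch sketch of that idea, written under stated assumptions: the two-layer MLP, the layer sizes, and all names here (MetaAttention, metadata_dim, hidden_dim) are illustrative choices, not the authors' exact implementation, which is available in the linked GitHub repository.

import torch
import torch.nn as nn

class MetaAttention(nn.Module):
    # Maps a degradation-metadata vector (e.g. blur-kernel PCA coefficients
    # or a compression level) to one attention weight per feature channel,
    # then rescales the incoming feature maps channel-wise.
    def __init__(self, num_channels, metadata_dim, hidden_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(metadata_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, num_channels),
            nn.Sigmoid(),  # keeps the attention weights in (0, 1)
        )

    def forward(self, features, metadata):
        # features: (B, C, H, W); metadata: (B, metadata_dim)
        attention = self.mlp(metadata)                      # (B, C)
        attention = attention.unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return features * attention                         # broadcast over H, W

# Usage: the layer is architecture-agnostic, so it can be dropped between
# the convolutional blocks of an existing SR network.
layer = MetaAttention(num_channels=64, metadata_dim=8)
feats = torch.randn(2, 64, 32, 32)   # intermediate feature maps
meta = torch.randn(2, 8)             # degradation parameters per image
out = layer(feats, meta)             # shape unchanged: (2, 64, 32, 32)

Because the modulation is a plain channel-wise multiplication, the only added parameters are those of the small MLP, which is consistent with the low memory/runtime overheads reported in the abstract.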
Appears in Collections:Scholarly Works - FacICTCCE

Files in This Item:
File | Description | Size | Format
Improving_Super-Resolution_Performance_Using_Meta-Attention_Layers.pdf | Restricted Access | 1.32 MB | Adobe PDF


Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.