Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/120149
Title: Inter/intra-observer agreement in video-capsule endoscopy : are we getting it all wrong? A systematic review and meta-analysis
Authors: Cortegoso Valdivia, Pablo
Deding, Ulrik
Bjørsum-Meyer, Thomas
Baatrup, Gunnar
Fernández-Urién, Ignacio
Dray, Xavier
Boal-Carvalho, Pedro
Ellul, Pierre
Toth, Ervin
Rondonotti, Emanuele
Kaalby, Lasse
Pennazio, Marco
Koulaouzidis, Anastasios
Authors: International CApsule endoscopy REsearch (I-CARE) Group
Keywords: Capsule endoscopy
Gastrointestinal system -- Examination
Gastrointestinal hemorrhage
Colon (Anatomy) -- Examination
Systematic reviews (Medical research)
Meta-analysis
Issue Date: 2022
Publisher: MDPI AG
Citation: Cortegoso Valdivia, P., Deding, U., Bjørsum-Meyer, T., Baatrup, G., Fernández-Urién, I., Dray, X.,...Koulaouzidis, A. (2022). Inter/intra-observer agreement in video-capsule endoscopy: are we getting it all wrong? A systematic review and meta-analysis. Diagnostics, 12(10), 2400.
Abstract: Video-capsule endoscopy (VCE) reading is a time- and energy-consuming task. Agreement on findings between readers (either different or the same) is a crucial point for increasing performance and providing valid reports. The aim of this systematic review with meta-analysis is to provide an evaluation of inter/intra-observer agreement in VCE reading. A systematic literature search in PubMed, Embase and Web of Science was performed throughout September 2022. The degree of observer agreement, expressed with different test statistics, was extracted. As different statistics are not directly comparable, our analyses were stratified by type of test statistic, dividing them into groups of “None/Poor/Minimal”, “Moderate/Weak/Fair”, “Good/Excellent/Strong” and “Perfect/Almost perfect” to report the proportions of each. In total, 60 studies were included in the analysis, with a total of 579 comparisons. The quality of included studies, assessed with the MINORS score, was sufficient in 52/60 studies. The most common test statistics were the Kappa statistics for categorical outcomes (424 comparisons) and the intra-class correlation coefficient (ICC) for continuous outcomes (73 comparisons). In the overall comparison of inter-observer agreement, only 23% were evaluated as “good” or “perfect”; for intra-observer agreement, this was the case in 36%. Sources of heterogeneity (high, I² 81.8–98.1%) were investigated with meta-regressions, showing a possible role of country, capsule type and year of publication in Kappa inter-observer agreement. VCE reading suffers from substantial heterogeneity and sub-optimal agreement in both inter- and intra-observer evaluation. Artificial-intelligence-based tools and the adoption of a unified terminology may progressively enhance levels of agreement in VCE reading.
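As a minimal illustration of the kind of agreement statistic the review aggregates, the sketch below computes Cohen's kappa for two hypothetical VCE readers' categorical calls and buckets the result into bands named like those in the abstract. The lesion labels, the band cutoffs (Landis–Koch-style 0.4/0.6/0.8 thresholds), and all function names are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: Cohen's kappa for two readers' categorical
# findings, mapped to agreement bands similar to those in the review.
# Band cutoffs are assumed (Landis-Koch style), not from the paper.
from collections import Counter

def cohens_kappa(reader_a, reader_b):
    """Cohen's kappa for two equal-length lists of categorical labels."""
    assert len(reader_a) == len(reader_b) and reader_a
    n = len(reader_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(reader_a, reader_b)) / n
    # Chance-expected agreement from each reader's marginal label frequencies.
    freq_a, freq_b = Counter(reader_a), Counter(reader_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

def agreement_band(kappa):
    """Map a kappa value to a reporting band (thresholds are assumptions)."""
    if kappa < 0.4:
        return "None/Poor/Minimal"
    if kappa < 0.6:
        return "Moderate/Weak/Fair"
    if kappa < 0.8:
        return "Good/Excellent/Strong"
    return "Perfect/Almost perfect"

# Hypothetical calls by two readers on the same six VCE frames.
a = ["angioectasia", "ulcer", "normal", "ulcer", "normal", "normal"]
b = ["angioectasia", "normal", "normal", "ulcer", "normal", "ulcer"]
k = cohens_kappa(a, b)
print(round(k, 2), agreement_band(k))  # -> 0.45 Moderate/Weak/Fair
```

Kappa discounts chance agreement, which is why a raw 4/6 match rate here still lands in the "Moderate/Weak/Fair" band; the same bucketing idea lets the review pool otherwise non-comparable statistics by reported strength of agreement.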
URI: https://www.um.edu.mt/library/oar/handle/123456789/120149
Appears in Collections:Scholarly Works - FacM&SMed



Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.