Please use this identifier to cite or link to this item: https://www.um.edu.mt/library/oar/handle/123456789/104597
Title: Pre-training data quality and quantity for a low-resource language : new Corpus and BERT models for Maltese
Authors: Micallef, Kurt
Gatt, Albert
Tanti, Marc
van der Plas, Lonneke
Borg, Claudia
Keywords: Artificial intelligence
Natural language processing (Computer science)
Semantics
Issue Date: 2022
Publisher: Association for Computational Linguistics
Citation: Micallef, K., Gatt, A., Tanti, M., van der Plas, L., & Borg, C. (2022). Pre-training data quality and quantity for a low-resource language : new Corpus and BERT models for Maltese. Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, Virtual conference.
Abstract: Multilingual language models such as mBERT have seen impressive cross-lingual transfer to a variety of languages, but many languages remain excluded from these models. In this paper, we analyse the effect of pre-training with monolingual data for a low-resource language that is not included in mBERT – Maltese – with a range of pre-training setups. We conduct evaluations with the newly pre-trained models on three morphosyntactic tasks – dependency parsing, part-of-speech tagging, and named-entity recognition – and one semantic classification task – sentiment analysis. We also present a newly created corpus for Maltese, and determine the effect that the pre-training data size and domain have on downstream performance. Our results show that using a mixture of pre-training domains is often superior to using Wikipedia text only. We also find that a fraction of this corpus is enough to make significant leaps in performance over Wikipedia-trained models. We pre-train and compare two models on the new corpus: a monolingual BERT model trained from scratch (BERTu), and a further pre-trained multilingual BERT (mBERTu). The models achieve state-of-the-art performance on these tasks, despite the new corpus being considerably smaller than the corpora typically used for high-resource languages. On average, BERTu outperforms or performs competitively with mBERTu, and the largest gains are observed for higher-level tasks.
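
For readers wishing to experiment with the released models, the following is a minimal sketch of loading them for a downstream token-classification task (such as the POS tagging evaluated in the paper) using the Hugging Face transformers library. The hub identifiers "MLRS/BERTu" and "MLRS/mBERTu" and the label count are assumptions for illustration and are not specified in this record.

from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed Hub identifier; "MLRS/mBERTu" would select the further
# pre-trained multilingual variant instead.
MODEL_NAME = "MLRS/BERTu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME,
    num_labels=17,  # e.g. the 17 Universal Dependencies UPOS tags
)

# Tokenise a Maltese sentence and run a forward pass.
sentence = "Il-lingwa Maltija hija lingwa Semitika."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, sequence_length, num_labels)

The same pattern applies to the other downstream tasks reported in the paper (dependency parsing, named-entity recognition, sentiment analysis), with the model head and label set adjusted accordingly.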
URI: https://www.um.edu.mt/library/oar/handle/123456789/104597
Appears in Collections: Scholarly Works - InsLin

Files in This Item:
File: Pre_training_data_quality_and_quantity_for_a_low_resource_language_new_Corpus_and_BERT_models_for_Maltese_2022.pdf (Restricted Access)
Size: 245.44 kB
Format: Adobe PDF

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.