NOVOTNÝ, Vít, Eniafe Festus AYETIRAN, Dalibor BAČOVSKÝ, Dávid LUPTÁK, Michal ŠTEFÁNIK and Petr SOJKA. One Size Does Not Fit All: Finding the Optimal Subword Sizes for FastText Models across Languages. In Mitkov, Ruslan and Angelova, Galia. Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021). Varna, Bulgaria: INCOMA Ltd., 2021, p. 1068-1074. ISBN 978-954-452-072-4. Available from: https://dx.doi.org/10.26615/978-954-452-072-4_120.
Basic information
Original name One Size Does Not Fit All: Finding the Optimal Subword Sizes for FastText Models across Languages
Authors NOVOTNÝ, Vít (203 Czech Republic, guarantor, belonging to the institution), Eniafe Festus AYETIRAN (566 Nigeria, belonging to the institution), Dalibor BAČOVSKÝ (203 Czech Republic, belonging to the institution), Dávid LUPTÁK (703 Slovakia, belonging to the institution), Michal ŠTEFÁNIK (703 Slovakia, belonging to the institution) and Petr SOJKA (203 Czech Republic, belonging to the institution).
Edition Varna, Bulgaria: Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), p. 1068-1074, 7 pp. 2021.
Publisher INCOMA Ltd.
Other information
Original language English
Type of outcome Proceedings paper
Field of Study 60203 Linguistics
Country of publisher Bulgaria
Confidentiality degree is not subject to a state or trade secret
Publication form printed version
WWW DOI, proceedings preprint
RIV identification code RIV/00216224:14330/21:00122017
Organization unit Faculty of Informatics
ISBN 978-954-452-072-4
ISSN 1313-8502
DOI http://dx.doi.org/10.26615/978-954-452-072-4_120
Keywords (in Czech) fastText; učení reprezentace; slovní analogie; optimalizace hyperparametrů; modelování jazyka; vzdálenost jazyků
Keywords in English fastText; representation learning; word analogy; hyperparameter optimization; language modeling; language distance
Tags firank_B, language modeling, machine learning, similarity search
Tags International impact, Reviewed
Changed by: RNDr. Pavel Šmerk, Ph.D., učo 3880. Changed: 23/5/2022 14:55.
Abstract
Unsupervised representation learning of words from large multilingual corpora is useful for downstream tasks such as word sense disambiguation, semantic text similarity, and information retrieval. The representation precision of log-bilinear fastText models is mostly due to their use of subword information. In previous work, the optimization of fastText's subword sizes has not been fully explored, and non-English fastText models were trained using subword sizes optimized for English and German word analogy tasks. In our work, we find the optimal subword sizes on the English, German, Czech, Italian, Spanish, French, Hindi, Turkish, and Russian word analogy tasks. We then propose a simple n-gram coverage model and we show that it predicts better-than-default subword sizes on the Spanish, French, Hindi, Turkish, and Russian word analogy tasks. We show that the optimization of fastText's subword sizes matters and results in a 14% improvement on the Czech word analogy task. We also show that expensive parameter optimization can be replaced by a simple n-gram coverage model that consistently improves the accuracy of fastText models on the word analogy tasks by up to 3% compared to the default subword sizes, and that it is within 1% accuracy of the optimal subword sizes.
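As a minimal sketch of the hyperparameters the abstract refers to (not the paper's experimental code), the example below trains fastText models with default versus alternative subword sizes and scores them on a word analogy task. It uses Gensim's FastText implementation, where min_n and max_n set the character n-gram (subword) size range, with 3-6 as the common default; the corpus file, the alternative size range 2-5, and the English questions-words.txt analogy file are illustrative assumptions standing in for the per-language corpora and analogy datasets studied in the paper.

```python
from gensim.models import FastText
from gensim.models.word2vec import LineSentence

# Hypothetical corpus with one whitespace-tokenized sentence per line.
corpus = LineSentence("corpus.txt")

# Default subword sizes (3-6) versus an alternative range (2-5); the
# optimal range is language-dependent, which is the paper's point.
default_model = FastText(corpus, vector_size=300, min_n=3, max_n=6)
custom_model = FastText(corpus, vector_size=300, min_n=2, max_n=5)

# Word-analogy evaluation of the kind behind the reported accuracies;
# questions-words.txt is the standard English analogy file.
for name, model in [("default", default_model), ("custom", custom_model)]:
    score, _ = model.wv.evaluate_word_analogies("questions-words.txt")
    print(f"{name} (min_n={model.wv.min_n}, max_n={model.wv.max_n}): {score:.3f}")
```

The same loop, run over candidate (min_n, max_n) pairs, is one way to reproduce the kind of per-language subword-size search the abstract describes, though the paper's n-gram coverage model is meant to avoid exactly this expensive grid of training runs.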
Links
MUNI/A/1573/2020, internal MU code. Name: Applied research: search, analysis and visualization of large-scale data, natural language processing, artificial intelligence for biomedical image analysis.
Investor: Masaryk University