
Does Size Matter? - Comparing Evaluation Dataset Size for the Bilingual Lexicon Induction

DENISOVÁ, Michaela and Pavel RYCHLÝ

Basic information

Original name

Does Size Matter? - Comparing Evaluation Dataset Size for the Bilingual Lexicon Induction

Authors

DENISOVÁ, Michaela (703 Slovakia, belonging to the institution) and Pavel RYCHLÝ (203 Czech Republic, belonging to the institution)

Edition

Karlova Studánka, Proceedings of the Seventeenth Workshop on Recent Advances in Slavonic Natural Language Processing, RASLAN 2023, pp. 47-56, 10 pp., 2023

Publisher

Tribun EU

Other information

Language

English

Type of outcome

Article in proceedings

Field of Study

10201 Computer sciences, information science, bioinformatics

Country of publisher

Czech Republic

Confidentiality degree

is not subject to a state or trade secret

Publication form

printed version (print)

RIV identification code

RIV/00216224:14330/23:00133036

Organization unit

Faculty of Informatics

ISBN

978-80-263-1793-7

Keywords in English

Cross-lingual word embeddings; Bilingual lexicon induction; Evaluation dataset size

Abstract

In the original

Cross-lingual word embeddings have been a popular approach for inducing bilingual lexicons. However, the evaluation of this task varies from paper to paper, and the gold standard dictionaries used for evaluation are frequently criticised for the mistakes they contain. While there have been efforts to unify the evaluation and the gold standard dictionaries, we propose a further property that should be considered when compiling an evaluation dataset: its size. In this paper, we evaluate three baseline models on three diverse language pairs (Estonian-Slovak, Czech-Slovak, English-Korean) and experiment with evaluation datasets of various sizes: 200, 500, 1.5K, and 3K source words. Moreover, we compare the results with a manual error analysis. In this experiment, we show whether the size of an evaluation dataset impacts the results and how to select the ideal evaluation dataset size. We make our code and datasets publicly available.
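
The paper's released code is not reproduced here; the following is only a minimal sketch of the experiment's shape: subsampling a gold-standard dictionary to the sizes listed in the abstract and scoring precision@1, the usual bilingual lexicon induction metric. The tab-separated file format, the file name, and the predict(src) callable (standing in for any of the three baseline models) are illustrative assumptions, not the authors' actual setup.

    # Sketch only: evaluate a BLI model on gold dictionaries of varying size.
    import random
    from collections import defaultdict

    def load_gold(path):
        """Read a tab-separated 'source<TAB>target' dictionary; one source
        word may map to several valid translations. Format is assumed."""
        gold = defaultdict(set)
        with open(path, encoding="utf-8") as f:
            for line in f:
                src, tgt = line.rstrip("\n").split("\t")
                gold[src].add(tgt)
        return gold

    def subsample(gold, n, seed=0):
        """Pick n source words at random, keeping all their translations."""
        rng = random.Random(seed)
        srcs = rng.sample(sorted(gold), n)
        return {s: gold[s] for s in srcs}

    def precision_at_1(gold, predict):
        """`predict(src)` is a hypothetical stand-in returning the model's
        top-ranked translation candidate for a source word."""
        hits = sum(1 for src, tgts in gold.items() if predict(src) in tgts)
        return hits / len(gold)

    # Compare scores across the dataset sizes used in the paper:
    # gold = load_gold("et-sk.test.tsv")           # hypothetical file
    # for n in (200, 500, 1500, 3000):
    #     print(n, precision_at_1(subsample(gold, n), my_model_predict))

Repeating the loop over several random seeds would show how much the score for a given size fluctuates, which is the kind of size sensitivity the paper examines.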