BenCzechMark: A Czech-Centric Multitask and Multimetric Benchmark for Large Language Models with Duel Scoring Mechanism
FAJCIK, Martin; Martin DOCEKAL; Jan DOLEZAL; Karel ONDREJ; Karel BENES et al.
Basic information
Original title
BenCzechMark: A Czech-Centric Multitask and Multimetric Benchmark for Large Language Models with Duel Scoring Mechanism
Authors
FAJCIK, Martin; Martin DOCEKAL; Jan DOLEZAL; Karel ONDREJ; Karel BENES; Jan KAPSA; Pavel SMRZ; Alexander POLOK; Michal HRADIS; Zuzana NEVĚŘILOVÁ; Aleš HORÁK; Radoslav SABOL; Michal ŠTEFÁNIK; Adam JIRKOVSKY; David ADAMCZYK; Petr HYNER; Jan HULA a Hynek KYDLICEK
Published in
Transactions of the Association for Computational Linguistics, Cambridge, MIT Press, 2025, ISSN 2307-387X
Additional information
Language
English
Type of result
Article in a peer-reviewed journal
Field
10201 Computer sciences, information science, bioinformatics
Country of publisher
United States
Confidentiality
not subject to state or trade secrecy
Impact factor
6.900 in 2024
Marked for transfer to RIV
Yes
RIV code
RIV/00216224:14330/25:00142533
Organizational unit
Faculty of Informatics
Keywords in English
Large language models; Czech; Benchmark
Abstract
In the original language
We present BenCzechMark (BCM), the first comprehensive Czech language benchmark designed for large language models, offering diverse tasks, multiple task formats, and multiple evaluation metrics. Its duel scoring system is grounded in statistical significance theory and uses aggregation across tasks inspired by social preference theory. Our benchmark encompasses 50 challenging tasks, with corresponding test datasets, primarily in native Czech, 14 of which are newly collected. These tasks span 8 categories and cover diverse domains, including historical Czech news, essays from pupils or language learners, and spoken word. Furthermore, we collect and clean BUT-Large Czech Collection, the largest publicly available clean Czech language corpus, and use it for (i) contamination analysis and (ii) continuous pretraining of the first Czech-centric 7B language model with Czech-specific tokenization. We use our model as a baseline for comparison with publicly available multilingual models. Lastly, we release and maintain a leaderboard with 50 existing model submissions, where new model submissions can be made at https://huggingface.co/spaces/CZLC/BenCzechMark.
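To illustrate the duel-scoring idea mentioned in the abstract, the sketch below shows one plausible way a "duel" between two models could be decided and aggregated across tasks. It is a minimal illustration only: the function names, the choice of a one-sided Wilcoxon signed-rank test, the significance level, and the win-fraction aggregation are assumptions made here, not the paper's exact procedure.

    # Hypothetical sketch of duel-style scoring; NOT the paper's exact method.
    # Assumes per-example scores for two models on each task and uses a
    # one-sided Wilcoxon signed-rank test as a stand-in significance test.
    from scipy.stats import wilcoxon

    def wins_duel(scores_a, scores_b, alpha=0.05):
        """True if model A significantly outperforms model B on one task."""
        if list(scores_a) == list(scores_b):  # identical scores: no winner
            return False
        _, p_value = wilcoxon(scores_a, scores_b, alternative="greater")
        return p_value < alpha

    def duel_score(tasks_a, tasks_b, alpha=0.05):
        """Fraction of tasks on which model A wins the duel against model B."""
        wins = sum(wins_duel(a, b, alpha) for a, b in zip(tasks_a, tasks_b))
        return wins / len(tasks_a)

In this reading, pairwise duel outcomes over all model pairs would then be combined into a ranking; the aggregation inspired by social preference theory referred to in the abstract is not reproduced here.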
Related projects
LM2023062, R&D project
OSCARS-01-247, MU internal code
90254, large research infrastructure