
Combining Sparse and Dense Information Retrieval: Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval)

NOVOTNÝ, Vít and Michal ŠTEFÁNIK

Basic information

Original name

Combining Sparse and Dense Information Retrieval: Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval)

Authors

NOVOTNÝ, Vít (203 Czech Republic, guarantor, belonging to the institution) and Michal ŠTEFÁNIK (703 Slovakia, belonging to the institution)

Publication

Bologna, Proceedings of the Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum, pp. 104-118, 15 pp., 2022

Publisher

CEUR-WS

Other information

Language

English

Type of outcome

Proceedings paper

Field of study

10201 Computer sciences, information science, bioinformatics

Country of publisher

Italy

Confidentiality degree

is not subject to a state or trade secret

Publication form

electronic version available online

Links

RIV code

RIV/00216224:14330/22:00126431

Organization unit

Faculty of Informatics

ISSN

Keywords in English

information retrieval; sparse retrieval; dense retrieval; soft vector space model; math representations; word embeddings; constrained positional weighting; decontextualization; word2vec; transformers

Tags

International impact, Reviewed
Changed: 6 Apr 2023, 09:35, RNDr. Pavel Šmerk, Ph.D.

Abstract

In the original language

Sparse retrieval techniques can detect exact matches, but are inadequate for mathematical texts, where the same information can be expressed as either text or math. The soft vector space model has been shown to improve sparse retrieval on semantic text similarity, text classification, and machine translation evaluation tasks, but it has not yet been properly evaluated on math information retrieval. In our work, we compare the soft vector space model against standard sparse retrieval baselines and state-of-the-art math information retrieval systems from Task 1 (Answer Retrieval) of the ARQMath-3 lab. We evaluate the impact of different math representations, different notions of similarity between key words and math symbols ranging from Levenshtein distances to deep neural language models, and different ways of combining text and math. We show that using the soft vector space model consistently improves effectiveness compared to using standard sparse retrieval techniques. We also show that the Tangent-L math representation achieves better effectiveness than LaTeX, and that modeling text and math separately using two models improves effectiveness compared to jointly modeling text and math using a single model. Lastly, we show that different math representations and different ways of combining text and math benefit from different notions of similarity between tokens. Our best system achieves NDCG' of 0.251 on Task 1 of the ARQMath-3 lab.
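To make the central idea of the abstract concrete, below is a minimal sketch of the soft cosine measure that underlies the soft vector space model: unlike classic cosine similarity, it uses a token-similarity matrix so that a text keyword and its math notation can match. The vocabulary, similarity values, and matrix here are toy assumptions for illustration, not the paper's actual token similarities or implementation.

```python
import numpy as np

# Toy vocabulary mixing text keywords and math symbols (illustrative only).
vocab = ["integral", "derivative", "\\int", "\\frac{d}{dx}"]

# Hypothetical token-similarity matrix S. The identity matrix recovers the
# classic vector space model; the off-diagonal 0.8 entries link a word to
# its math notation, as could be derived from word embeddings or
# Levenshtein distances between tokens.
S = np.array([
    [1.0, 0.0, 0.8, 0.0],
    [0.0, 1.0, 0.0, 0.8],
    [0.8, 0.0, 1.0, 0.0],
    [0.0, 0.8, 0.0, 1.0],
])

def soft_cosine(x, y, S):
    """Soft cosine measure: cosine similarity under the inner product
    <x, y> = x^T S y induced by the token-similarity matrix S."""
    return (x @ S @ y) / np.sqrt((x @ S @ x) * (y @ S @ y))

# The query uses the text keyword, the document uses the math symbol,
# so their bag-of-words vectors share no dimensions.
query = np.array([1.0, 0.0, 0.0, 0.0])  # "integral"
doc = np.array([0.0, 0.0, 1.0, 0.0])    # "\int"

print(soft_cosine(query, doc, S))          # 0.8 under the toy S
print(soft_cosine(query, doc, np.eye(4)))  # 0.0 under the classic VSM
```

With the identity matrix the two vectors are orthogonal and score zero, which illustrates why purely sparse retrieval misses text/math paraphrases; the soft similarity matrix is what lets the model bridge the two representations.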