Detailed Information on Publication Record
2022
Combining Sparse and Dense Information Retrieval: Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval)
NOVOTNÝ, Vít and Michal ŠTEFÁNIK
Basic information
Original name
Combining Sparse and Dense Information Retrieval: Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval)
Authors
NOVOTNÝ, Vít (203 Czech Republic, guarantor, belonging to the institution) and Michal ŠTEFÁNIK (703 Slovakia, belonging to the institution)
Edition
Bologna, Proceedings of the Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum, p. 104-118, 15 pp. 2022
Publisher
CEUR-WS
Other information
Language
English
Type of outcome
Article in conference proceedings
Field of Study
10201 Computer sciences, information science, bioinformatics
Country of publisher
Italy
Confidentiality degree
is not subject to a state or trade secret
Publication form
electronic version available online
RIV identification code
RIV/00216224:14330/22:00126431
Organization unit
Faculty of Informatics
Keywords in English
information retrieval; sparse retrieval; dense retrieval; soft vector space model; math representations; word embeddings; constrained positional weighting; decontextualization; word2vec; transformers
Tags
International impact, Reviewed
Changed: 6/4/2023 09:35, RNDr. Pavel Šmerk, Ph.D.
Abstract
In the original
Sparse retrieval techniques can detect exact matches, but are inadequate for mathematical texts, where the same information can be expressed as either text or math. The soft vector space model has been shown to improve sparse retrieval on semantic text similarity, text classification, and machine translation evaluation tasks, but it has not yet been properly evaluated on math information retrieval. In our work, we compare the soft vector space model against standard sparse retrieval baselines and state-of-the-art math information retrieval systems from Task 1 (Answer Retrieval) of the ARQMath-3 lab. We evaluate the impact of different math representations, different notions of similarity between key words and math symbols ranging from Levenshtein distances to deep neural language models, and different ways of combining text and math. We show that using the soft vector space model consistently improves effectiveness compared to using standard sparse retrieval techniques. We also show that the Tangent-L math representation achieves better effectiveness than LaTeX, and that modeling text and math separately using two models improves effectiveness compared to jointly modeling text and math using a single model. Lastly, we show that different math representations and different ways of combining text and math benefit from different notions of similarity between tokens. Our best system achieves NDCG' of 0.251 on Task 1 of the ARQMath-3 lab.
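The soft vector space model evaluated in the abstract scores a query against a document while crediting pairs of distinct but similar tokens (e.g. a word and a math symbol expressing the same concept). A minimal sketch of the underlying soft cosine measure is shown below; the toy vocabulary and the similarity values in `S` are illustrative assumptions, not values from the paper, where token similarities instead come from sources such as Levenshtein distances or deep neural language models.

```python
import numpy as np

# Hypothetical toy vocabulary (illustrative only). S[i, j] holds an assumed
# similarity between tokens i and j, e.g. derived from edit distance or
# embedding cosine similarity; S = I recovers the standard (hard) cosine.
vocab = ["integral", "antiderivative", "x^2", "x**2"]
S = np.array([
    [1.0, 0.8, 0.0, 0.0],  # "integral" ~ "antiderivative"
    [0.8, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.9],  # "x^2" ~ "x**2": same math, different notation
    [0.0, 0.0, 0.9, 1.0],
])

def soft_cosine(q: np.ndarray, d: np.ndarray, S: np.ndarray) -> float:
    """Soft cosine: q.T S d normalized by the soft norms of q and d."""
    num = q @ S @ d
    denom = np.sqrt(q @ S @ q) * np.sqrt(d @ S @ d)
    return float(num / denom)

# Query and document as term-frequency vectors over the toy vocabulary.
query = np.array([1.0, 0.0, 1.0, 0.0])  # "integral x^2"
doc   = np.array([0.0, 1.0, 0.0, 1.0])  # "antiderivative x**2"

print(soft_cosine(query, doc, S))          # positive despite no exact match
print(soft_cosine(query, doc, np.eye(4)))  # hard cosine baseline: 0.0
```

The example shows why the model helps on mathematical text: the query and document share no token, so sparse (hard cosine) retrieval scores them zero, while the similarity matrix lets related text and math tokens contribute to the match.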