BibTeX:
@inproceedings{2211178,
  author    = {Novotný, Vít and Štefánik, Michal},
  editor    = {Faggioli, Guglielmo and Ferro, Nicola and Hanbury, Allan and Potthast, Martin},
  title     = {Combining Sparse and Dense Information Retrieval: Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval)},
  booktitle = {Proceedings of the Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum},
  publisher = {CEUR-WS},
  address   = {Bologna},
  location  = {Bologna},
  year      = {2022},
  pages     = {104--118},
  url       = {http://ceur-ws.org/Vol-3180/paper-06.pdf},
  language  = {eng},
  howpublished = {electronic version (online)},
  keywords  = {information retrieval; sparse retrieval; dense retrieval; soft vector space model; math representations; word embeddings; constrained positional weighting; decontextualization; word2vec; transformers},
}
RIS:
TY - CONF
ID - 2211178
AU - Novotný, Vít
AU - Štefánik, Michal
PY - 2022
TI - Combining Sparse and Dense Information Retrieval: Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval)
T2 - Proceedings of the Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum
PB - CEUR-WS
CY - Bologna
SP - 104
EP - 118
KW - information retrieval
KW - sparse retrieval
KW - dense retrieval
KW - soft vector space model
KW - math representations
KW - word embeddings
KW - constrained positional weighting
KW - decontextualization
KW - word2vec
KW - transformers
UR - http://ceur-ws.org/Vol-3180/paper-06.pdf
N2 - Sparse retrieval techniques can detect exact matches, but are inadequate for mathematical texts, where the same information can be expressed as either text or math. The soft vector space model has been shown to improve sparse retrieval on semantic text similarity, text classification, and machine translation evaluation tasks, but it has not yet been properly evaluated on math information retrieval. In our work, we compare the soft vector space model against standard sparse retrieval baselines and state-of-the-art math information retrieval systems from Task 1 (Answer Retrieval) of the ARQMath-3 lab. We evaluate the impact of different math representations, different notions of similarity between key words and math symbols ranging from Levenshtein distances to deep neural language models, and different ways of combining text and math. We show that using the soft vector space model consistently improves effectiveness compared to using standard sparse retrieval techniques. We also show that the Tangent-L math representation achieves better effectiveness than LaTeX, and that modeling text and math separately using two models improves effectiveness compared to jointly modeling text and math using a single model. Lastly, we show that different math representations and different ways of combining text and math benefit from different notions of similarity between tokens. Our best system achieves NDCG' of 0.251 on Task 1 of the ARQMath-3 lab.
ER -
LaTeX:
NOVOTNÝ, Vít and Michal ŠTEFÁNIK. Combining Sparse and Dense Information Retrieval: Soft Vector Space Model and MathBERTa at ARQMath-3 Task 1 (Answer Retrieval). Online. In Guglielmo Faggioli, Nicola Ferro, Allan Hanbury and Martin Potthast (eds.). \textit{Proceedings of the Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum}. Bologna: CEUR-WS, 2022, p.~104--118. ISSN~1613-0073.