D 2022

Comparing RNN and Transformer Context Representations in the Czech Answer Selection Task

MEDVEĎ, Marek, Aleš HORÁK and Radoslav SABOL

Basic information

Original name

Comparing RNN and Transformer Context Representations in the Czech Answer Selection Task

Authors

MEDVEĎ, Marek (703 Slovakia, guarantor, belonging to the institution), Aleš HORÁK (203 Czech Republic) and Radoslav SABOL (703 Slovakia)

Edition

Portugal, Proceedings of the 14th International Conference on Agents and Artificial Intelligence (ICAART), pp. 388-394, 7 pp., 2022

Publisher

SCITEPRESS

Other information

Language

English

Type of outcome

Article in Proceedings

Field of Study

10200 1.2 Computer and information sciences

Country of publisher

Portugal

Confidentiality degree

is not subject to a state or trade secret

Publication form

electronic version available online

RIV identification code

RIV/00216224:14330/22:00125094

Organization unit

Faculty of Informatics

ISBN

978-989-758-547-0

UT WoS

000774776400046

Keywords in English

Question Answering; Answer Context; Answer Selection; Czech; Sentence Embeddings; RNN; BERT

Tags

International impact, Reviewed

Abstract

In the original language

Open-domain question answering now inevitably builds upon advanced neural models processing large unstructured textual sources serving as a kind of underlying knowledge base. In the case of non-mainstream, highly-inflected languages, the state-of-the-art approaches lack large training datasets, emphasizing the need for other improvement techniques. In this paper, we present a detailed evaluation of a new technique employing various context representations in the answer selection task, where the best answer sentence from a candidate document is identified as the most relevant to the human-entered question. The input data here consists not only of each sentence in isolation but also of its preceding sentence(s) as the context. We compare seven different context representations, including direct recurrent network (RNN) embeddings and several BERT-model based sentence embedding vectors. All experiments are evaluated with the new version 3.1 of the Czech question answering benchmark dataset SQAD, with possible multiple correct answers as a new feature. The comparison shows that the BERT-based sentence embeddings are able to offer the best context representations, reaching a mean average precision of 83.39%, which is a new best score for this dataset.
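To make the context-representation idea concrete, the following Python sketch ranks candidate answer sentences against a question using sentence embeddings in which each candidate is encoded together with its preceding sentence(s), and scores the ranking with mean average precision over possibly multiple correct answers. This is an illustrative sketch only, not the authors' system: the sentence-transformers library, the model name paraphrase-multilingual-MiniLM-L12-v2, and the helper names rank_candidates and mean_average_precision are assumptions, and the handling of the SQAD data itself is omitted.

import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative model choice (an assumption, not the Czech models evaluated in the paper).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def rank_candidates(question, sentences, context_window=1):
    # Represent each candidate sentence together with up to `context_window`
    # preceding sentences, then rank candidates by cosine similarity to the question.
    contexts = [" ".join(sentences[max(0, i - context_window):i + 1])
                for i in range(len(sentences))]
    q_vec = model.encode([question])[0]
    s_vecs = model.encode(contexts)
    sims = s_vecs @ q_vec / (np.linalg.norm(s_vecs, axis=1) * np.linalg.norm(q_vec))
    return np.argsort(-sims)  # candidate indices, best first

def mean_average_precision(rankings, gold_sets):
    # MAP over questions; each gold set may contain several correct
    # answer-sentence indices (multiple correct answers, as in SQAD v3.1).
    ap_scores = []
    for ranking, gold in zip(rankings, gold_sets):
        hits, precisions = 0, []
        for rank, idx in enumerate(ranking, start=1):
            if idx in gold:
                hits += 1
                precisions.append(hits / rank)
        ap_scores.append(sum(precisions) / max(len(gold), 1))
    return float(np.mean(ap_scores))

# Hypothetical usage: one question over a three-sentence document, second sentence correct.
ranking = rank_candidates("Kdo napsal Babičku?",
                          ["Božena Němcová byla česká spisovatelka.",
                           "Napsala povídku Babička.",
                           "Zemřela v roce 1862."])
print(mean_average_precision([list(ranking)], [{1}]))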

Links

LM2018101, research and development project
Name: Digital Research Infrastructure for Language Technologies, Arts and Humanities (Acronym: LINDAT/CLARIAH-CZ)
Investor: Ministry of Education, Youth and Sports of the CR
MUNI/A/1195/2021, internal MU code
Name: Applied research in the areas of search, analysis and visualization of large-scale data, natural language processing, and applied artificial intelligence
Investor: Masaryk University