LODOVICO MOLINA, Ivo, Valdemar ŠVÁBENSKÝ, Tsubasa MINEMATSU, Li CHEN, Fumiya OKUBO and Atsushi SHIMADA. Comparison of Large Language Models for Generating Contextually Relevant Questions. In Proceedings of the 19th European Conference on Technology Enhanced Learning (ECTEL). 2024.
Basic information
Original name: Comparison of Large Language Models for Generating Contextually Relevant Questions
Authors: LODOVICO MOLINA, Ivo, Valdemar ŠVÁBENSKÝ, Tsubasa MINEMATSU, Li CHEN, Fumiya OKUBO and Atsushi SHIMADA.
Edition: Proceedings of the 19th European Conference on Technology Enhanced Learning (ECTEL), 2024.
Other information
Original language: English
Type of outcome: Conference abstract
Field of Study: 10201 Computer sciences, information science, bioinformatics
Confidentiality degree: is not subject to a state or trade secret
WWW: ArXiv.org preprint
Keywords in English: Generative AI; Question Generation; AI in Education
Tags: International impact, Reviewed
Changed by: RNDr. Valdemar Švábenský, Ph.D., učo 395868. Changed: 19/8/2024 03:03.
Abstract
This study explores the effectiveness of Large Language Models (LLMs) for Automatic Question Generation in educational settings. Three LLMs are compared in their ability to create questions from university slide text without fine-tuning. Questions were obtained in a two-step pipeline: first, answer phrases were extracted from slides using Llama 2-Chat 13B; then, the three models generated questions for each answer. To analyze whether the questions would be suitable in educational applications for students, a survey was conducted with 46 students who evaluated a total of 246 questions across five metrics: clarity, relevance, difficulty, slide relation, and question-answer alignment. Results indicate that GPT-3.5 and Llama 2-Chat 13B outperform Flan T5 XXL by a small margin, particularly in terms of clarity and question-answer alignment. GPT-3.5 especially excels at tailoring questions to match the input answers. The contribution of this research is the analysis of the capacity of LLMs for Automatic Question Generation in education.
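The two-step pipeline summarized above (answer-phrase extraction with Llama 2-Chat 13B, then question generation by each compared model conditioned on every extracted answer) could be sketched roughly as follows. The prompt wording, function names, and the dummy LLM caller are illustrative assumptions only, not the authors' implementation.

"""Illustrative sketch of the two-step question-generation pipeline
described in the abstract. Prompts, function names, and the dummy
LLM caller are assumptions for illustration, not the authors' code."""

from typing import Callable, Iterable

# A caller takes (model_name, prompt) and returns the model's text output.
LLMCaller = Callable[[str, str], str]

ANSWER_EXTRACTOR = "Llama-2-Chat-13B"   # step 1 model, per the abstract
QUESTION_GENERATORS = [                 # step 2 models compared in the study
    "GPT-3.5",
    "Llama-2-Chat-13B",
    "Flan-T5-XXL",
]


def extract_answer_phrases(slide_text: str, call_llm: LLMCaller) -> list[str]:
    """Step 1: ask the extractor model for candidate answer phrases."""
    prompt = (
        "Extract short answer phrases suitable for quiz questions "
        f"from the following lecture slide text, one per line:\n\n{slide_text}"
    )
    response = call_llm(ANSWER_EXTRACTOR, prompt)
    return [line.strip() for line in response.splitlines() if line.strip()]


def generate_questions(slide_text: str, answers: Iterable[str],
                       call_llm: LLMCaller) -> dict[str, list[str]]:
    """Step 2: each compared model generates one question per answer phrase."""
    questions: dict[str, list[str]] = {model: [] for model in QUESTION_GENERATORS}
    for model in QUESTION_GENERATORS:
        for answer in answers:
            prompt = (
                f"Slide text:\n{slide_text}\n\n"
                f"Write a question whose answer is: {answer}"
            )
            questions[model].append(call_llm(model, prompt).strip())
    return questions


if __name__ == "__main__":
    # Dummy caller so the sketch runs end to end; replace with a real inference API.
    def echo_llm(model_name: str, prompt: str) -> str:
        return f"[{model_name} output for a prompt of {len(prompt)} characters]"

    slide = "Gradient descent updates parameters in the direction of the negative gradient."
    answers = extract_answer_phrases(slide, echo_llm)
    print(generate_questions(slide, answers, echo_llm))

Replacing echo_llm with calls to the actual model endpoints yields, for each slide, one question per model per extracted answer, which is the set of items the 46 students then rated on the five survey metrics.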