Publication:

Distractor Generation for Multiple-Choice Questions with Predictive Prompting and Large Language Models

 
dc.contributor.author: Bitew, Semere Kiros
dc.contributor.author: Deleu, Johannes
dc.contributor.author: Develder, Chris
dc.contributor.author: Demeester, Thomas
dc.contributor.imecauthor: Bitew, Semere Kiros
dc.contributor.imecauthor: Deleu, Johannes
dc.contributor.imecauthor: Develder, Chris
dc.contributor.imecauthor: Demeester, Thomas
dc.contributor.orcidimec: Deleu, Johannes::0000-0001-8277-2415
dc.contributor.orcidimec: Develder, Chris::0000-0003-2707-4176
dc.contributor.orcidimec: Demeester, Thomas::0000-0002-9901-5768
dc.date.accessioned: 2025-04-14T10:38:29Z
dc.date.available: 2025-04-13T04:30:51Z
dc.date.available: 2025-04-14T10:38:29Z
dc.date.issued: 2025
dc.description.abstract: Large Language Models (LLMs) such as ChatGPT have demonstrated remarkable performance across various tasks and have garnered significant attention from both researchers and practitioners. However, in an educational context, we still observe a performance gap in generating distractors—i.e., plausible yet incorrect answers—with LLMs for multiple-choice questions (MCQs). In this study, we propose a strategy for guiding LLMs, such as ChatGPT, in generating relevant distractors by prompting them with question items automatically retrieved from a question bank as well-chosen in-context examples. We evaluate our LLM-based solutions using a quantitative assessment on an existing test set, as well as through quality annotations by human experts, i.e., teachers. We found that on average 53% of the generated distractors presented to the teachers were rated as high-quality, i.e., suitable for immediate use as is, outperforming the state-of-the-art model. We also show the gains of our approach (https://github.com/semerekiros/distractGPT/) in generating high-quality distractors by comparing it with a zero-shot ChatGPT and a few-shot ChatGPT prompted with static examples.
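The retrieval-augmented ("predictive") prompting strategy the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' released implementation: the toy question bank, the token-overlap retriever, and all function names below are assumptions, and the real system would send the assembled prompt to an LLM such as ChatGPT.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two question texts (a simple
    stand-in for whatever retriever the real system uses)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve_examples(question: str, bank: list[dict], k: int = 2) -> list[dict]:
    """Return the k question-bank items most similar to the target question."""
    return sorted(bank, key=lambda it: jaccard(question, it["question"]),
                  reverse=True)[:k]

def build_prompt(question: str, answer: str, bank: list[dict], k: int = 2) -> str:
    """Assemble a few-shot prompt whose in-context examples are retrieved
    from the question bank rather than chosen statically."""
    lines = ["Generate three plausible but incorrect answers (distractors)."]
    for ex in retrieve_examples(question, bank, k):
        lines.append(f"Question: {ex['question']}")
        lines.append(f"Answer: {ex['answer']}")
        lines.append(f"Distractors: {', '.join(ex['distractors'])}")
    lines.append(f"Question: {question}")
    lines.append(f"Answer: {answer}")
    lines.append("Distractors:")
    return "\n".join(lines)

# Hypothetical mini question bank for illustration only.
bank = [
    {"question": "What is the capital of France?", "answer": "Paris",
     "distractors": ["Lyon", "Marseille", "Nice"]},
    {"question": "What is the capital of Italy?", "answer": "Rome",
     "distractors": ["Milan", "Naples", "Turin"]},
    {"question": "Which gas do plants absorb?", "answer": "Carbon dioxide",
     "distractors": ["Oxygen", "Nitrogen", "Helium"]},
]

prompt = build_prompt("What is the capital of Spain?", "Madrid", bank)
print(prompt)
```

With this bank, the two capital-city items are retrieved as in-context examples while the unrelated plant question is not; the prompt ends with an open "Distractors:" slot for the LLM to complete. The paper's contribution is precisely this retrieval step, contrasted with the zero-shot and static few-shot baselines mentioned in the abstract.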
dc.identifier.doi: 10.1007/978-3-031-74627-7_4
dc.identifier.eisbn: 978-3-031-74627-7
dc.identifier.isbn: 978-3-031-74626-0
dc.identifier.issn: 1865-0929
dc.identifier.uri: https://imec-publications.be/handle/20.500.12860/45528
dc.publisher: SPRINGER INTERNATIONAL PUBLISHING AG
dc.source.beginpage: 48
dc.source.conference: 8th European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
dc.source.conferencedate: 2023-09-18
dc.source.conferencelocation: Turin
dc.source.endpage: 63
dc.source.journal: International Workshops of ECML PKDD 2023
dc.source.numberofpages: 16
dc.subject.keywords: TESTS
dc.title: Distractor Generation for Multiple-Choice Questions with Predictive Prompting and Large Language Models
dc.type: Proceedings paper
dspace.entity.type: Publication
Files

Original bundle

Name: 8375_acc.pdf
Size: 308.04 KB
Format: Adobe Portable Document Format
Description: Accepted

Name: 8375.pdf
Size: 28.58 MB
Format: Adobe Portable Document Format
Description: Published