Authors: Rutatola, Edger; Stroeken, Koen; Belpaeme, Tony
Date: 2026-01-26
ISBN: 978-3-031-98280-4
ISSN: 0302-9743
URI: https://imec-publications.be/handle/20.500.12860/58720
Language: English
Title: Leveraging Large Language Models for a Swahili Mathematics ITS in Tanzania: Designing Effective Prompts
Type: Proceedings paper
DOI: 10.1007/978-3-031-98281-1_1
Web of Science ID: WOS:001562419800001

Abstract: The advancement of Large Language Models (LLMs) has significantly enhanced intelligent tutoring systems, enabling them to engage learners through natural dialogue. This interaction boosts learner engagement but presents challenges for low-resource languages, such as Swahili, Tanzania's national language. By design, LLMs rely on patterns learned during training to predict subsequent words, making them better suited to conversational tasks than to factual computation and reasoning tasks, such as solving mathematics problems. This study investigates the suitability of GPT-4 for generating Swahili-language mathematics content for teaching geometry to primary school students, assessing both contextual and factual accuracy. Using nine varied prompts, we generated 621 different topic introductions, which were evaluated by primary school mathematics teachers. Results reveal that GPT-4 can generate contextually relevant content but struggles with complex mathematical computations. Additionally, the prompt variations provided valuable insights into designing effective prompts for similar tasks.