
Abstract

The research aims to determine which types of prompts yield the most methodologically suitable English language learning tasks for university students with a B1 proficiency level when working with ChatGPT. The article examines the use of the chatbot for generating grammar exercises on the topic of “Past Simple vs. Present Perfect”. The study compares generation results based on open, closed, few-shot (prompt with an example), and role-based prompts. The generated tasks were evaluated for their compliance with specified parameters, methodological completeness, complexity level, and suitability for inclusion in a lesson plan. The scientific novelty of the research lies in the specification of prompt parameters that directly enhance the methodological suitability of generated tasks: indicating the students’ level, exercise format, thematic context, mandatory linguistic elements, and the inclusion of an answer key and brief explanations. It was established that closed, role-based, and few-shot prompts are the most methodologically valuable, whereas open prompts more frequently require additional selection and editing. The findings allow for the recommendation of combining prompt types and employing iterative query refinement as an effective strategy for preparing instructional materials.
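As an illustration (not taken from the article itself), the prompt parameters the study identifies — learner level, exercise format, thematic context, mandatory linguistic elements, and a request for an answer key with brief explanations — can be assembled into a role-based, closed prompt with a small template. The function name and wording below are hypothetical; they sketch one way a teacher might operationalize the study's parameter set.

```python
def build_prompt(level: str, exercise_format: str, topic: str,
                 target_items: list[str], n_items: int = 10) -> str:
    """Assemble a role-based, closed prompt for a chatbot such as ChatGPT.

    Includes the parameters the study associates with higher methodological
    suitability: learner level, exercise format, thematic context, mandatory
    linguistic elements, and an answer key with brief explanations.
    """
    items = " and ".join(target_items)
    return (
        "You are an experienced university teacher of English as a foreign language. "
        f"Create a {exercise_format} exercise with exactly {n_items} sentences "
        f"for {level}-level students on the topic of {topic}. "
        f"Each sentence must require the student to choose between {items}. "
        "After the exercise, provide an answer key with a brief explanation "
        "for each item."
    )

# Example: a gap-fill exercise contrasting the two tenses from the article.
prompt = build_prompt("B1", "gap-fill", "travel",
                      ["Past Simple", "Present Perfect"])
```

The resulting string can then be sent to the chatbot and iteratively refined, in line with the article's recommendation to combine prompt types and refine queries step by step.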


Keywords

prompts, tasks, level, prompt, research
