Description
The provision of written corrective feedback on language learners' writing has been the focus of much investigation over the last three decades. Within this literature, there is growing evidence that graduated feedback, which increases in explicitness according to learner needs in real time, can raise the learning potential of feedback on writing. However, because of its dialogical nature, such feedback has largely been limited to oral interaction, and its implementation is reported as too time-consuming for most real-life classrooms. This presentation introduces the ongoing development of DynaWrite (version 1.0), an online tool that automatically provides graduated feedback on learners' writing, thereby overcoming these modality and time constraints. In addition to detecting grammatical errors, the tool provides automated dynamic feedback on brevity, clarity, objectivity, and formality as they pertain to the genre of scientific writing. The proposed web application relays user input to a chatbot based on a large language model, and the chatbot is configured to increase the explicitness of its feedback when the user is unable to resolve an error. We first briefly describe the theoretical foundation of the tool and of graduated feedback. This is followed by a description of the four levels of feedback the tool provides, with level 1 being the most implicit and level 4 the most explicit. We then discuss the process of error categorization and of instructing the chatbot to reliably detect these errors. The presentation concludes with a demonstration of how the tool is used and how it can track learner progress within specific error categories.
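
As an illustrative sketch only (the abstract does not describe DynaWrite's actual implementation), the snippet below shows one way a web application could relay learner text to a large-language-model chatbot whose instructions are parameterized by the four graduated feedback levels. The OpenAI client, the placeholder model name, and the wording of each level are assumptions introduced here for illustration.

```python
# Illustrative sketch only; not DynaWrite's actual code.
# Assumes an OpenAI-style chat API and hypothetical level descriptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical paraphrase of the four graduated feedback levels
# (level 1 most implicit, level 4 most explicit).
LEVEL_PROMPTS = {
    1: "Indicate only that the sentence contains an error, without locating it.",
    2: "Point to the clause or phrase that contains the error.",
    3: "Name the error category (e.g. article use, brevity, formality).",
    4: "Provide the corrected form together with a brief explanation.",
}

def graduated_feedback(sentence: str, level: int) -> str:
    """Request feedback on one learner sentence at the given explicitness level."""
    system = (
        "You are a writing tutor for scientific English. "
        "Check grammar, brevity, clarity, objectivity and formality. "
        f"Feedback policy for this turn: {LEVEL_PROMPTS[level]}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the tool's actual model is not stated
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content

# A tutoring loop in this spirit would start at level 1 and escalate to the
# next level only when the learner's revision leaves the error unresolved.
```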
Keywords: Large Language Models; Chatbots; Automated Written Corrective Feedback; Dynamic Assessment