Speaker
Description
This study examines grammatical accuracy development in Korean engineering graduate students’ academic writing over a university term. Students received peer, instructor, and large language model (LLM) feedback. Error analysis revealed significant week-by-week declines in overall errors and in specific error types (e.g., tense, determiners, punctuation). Correlation analyses linked feedback comments to accuracy gains. Combining human and LLM feedback may support measurable improvement in grammatical accuracy, highlighting the value of blended feedback approaches in academic contexts.
Summary
This presentation examines how combining instructor, peer, and LLM-generated written corrective feedback supports grammatical accuracy development in L2 writing. Using a mixed-methods design with weekly LLM grammar checks from ChatGPT, the study tracks error reduction over a university term. Findings suggest that LLM feedback, alongside human feedback, fosters self-regulated learning, significantly reduces grammar errors, and offers valid, reliable, and scalable grammar support for learners and educators.
| Teaching Context | College and university education |
|---|---|