Artificial intelligence and machine learning have been around for a long time, so what is behind the enormous amount of attention and capital now being directed towards large language models like OpenAI’s GPT-4 and Google Gemini? This workshop will present some of the historical and technical background on these remarkable systems, including why neural networks were essentially ignored for decades by most AI researchers, how they finally started producing good results, and what discoveries have led to the most recent improvements. We will then discuss some of the surprising attributes of these systems, including the fact that the models do not contain copies of their training data and that designers themselves have no way of knowing what the models have learned. Finally, we will discuss what may lie ahead, including why many researchers believe we may now be on the path to artificial general intelligence and eventually to superintelligent machines. Practical elements of the workshop will include an introduction to what models are available for use now, what they are capable of, and tips for prompting these models to produce desirable output. Participants are encouraged to bring a digital device.
In this workshop, each practitioner will involve participants in exploring approaches and activities that bring together Pragmatics, Generative AI, and Language Classroom Practices.
Yukie Saito will: (1) explain the challenges Japanese EFL students face in mastering pragmatically appropriate conversations using standard ELT textbooks; (2) introduce ChatGPT as an innovative tool to bridge this gap, focusing on conversational aspects typically difficult for Japanese learners, such as leave-taking, making, accepting, and refusing invitations, and giving and receiving compliments; and (3) explain how we can create conversations involving different participants in various contexts.
Participants will also have the opportunity to try creating conversations using ChatGPT.
Bradford J. Lee will (1) discuss common pitfalls in prompt composition and provide specific guidelines for getting the most accurate responses from ChatGPT as a pragmatic instruction tool. (Generative AI has the potential to serve as a speaking partner for students outside the classroom in role-play activities or discourse-completion tasks (DCTs). We analyzed the suitability, variability, and pragmatic strategies that ChatGPT 3.5 employed under a range of scenarios. Its responses were generally pragmatically suitable, though highly dependent on the amount of context provided. ChatGPT also displayed limited strategic variety and sometimes made inaccurate assumptions.)
Jim Ronald
Situation-specific or speech act-specific “conversations” generated by ChatGPT are quick and easy to produce and, while they may contain factual or logical inaccuracies, they typically use English that is both grammatically correct and pragmatically appropriate. As such, they provide much that language textbooks and classrooms lack. As a consequence of both their accuracy and their faults, these “conversations” provide opportunities for promoting critical thinking skills and pragmatic awareness, and for fostering skillful users of the target language. In my part of the workshop, we will explore ways that this may be achieved.
In the rapidly evolving domain of language education, Text to Speech (TTS) technology has become a transformative tool. Today's TTS systems offer near-native naturalness in voice quality, thanks to advancements in artificial intelligence. These improvements have not only enhanced the clarity and fluency of generated speech but also its authenticity, making TTS an invaluable asset in language learning contexts. Furthermore, the integration of new Speech Synthesis Markup Language (SSML) tags has expanded the functionality of TTS technologies. These tags allow for the manipulation of speech nuances, enabling the creation of diverse linguistic characters and the incorporation of multiple languages that were previously inaccessible. This presentation will demonstrate how teachers can leverage these tools effectively, with practical demonstrations of TTS in both English and Japanese. It will also explore the author's integration of TTS into other learning tools, such as Extensive Reading and Japanese reading fluency. Attendees will gain insights into creating more engaging and effective language learning experiences with this powerful tool.
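As a rough illustration of the SSML markup referred to above, the Python sketch below assembles a short bilingual SSML document. The voice names are placeholders, and actual tag support and voice identifiers vary by TTS provider, so this is a sketch of the idea rather than a recipe for any particular service.

```python
# Sketch: assembling an SSML document that mixes English and Japanese voices.
# Voice names below are placeholders; real TTS services each document their
# own supported voices and their own subset of SSML tags.

def build_ssml(english_text: str, japanese_text: str) -> str:
    return f"""
<speak>
  <voice name="en-US-PlaceholderVoice">
    <prosody rate="95%" pitch="+2st">{english_text}</prosody>
  </voice>
  <voice name="ja-JP-PlaceholderVoice">
    <lang xml:lang="ja-JP">{japanese_text}</lang>
  </voice>
</speak>
""".strip()

if __name__ == "__main__":
    ssml = build_ssml(
        "Let's practice reading this passage together.",
        "では、一緒にこの文章を読みましょう。",
    )
    # Paste the output into a TTS console or send it through an SSML-aware API.
    print(ssml)
```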
A growing body of research in computer-assisted language learning provides evidence that certain kinds of digital games can effectively facilitate second language acquisition. The gamification of learning tasks has attracted much attention in the field over recent years, but there is also ample evidence to suggest that certain digital games that were designed purely for entertainment purposes can serve as effective vehicles for language learning. Learners tend to find such games more engaging than their made-for-education counterparts and these games can be integrated into foreign language classroom activities, made available to students in self-access learning environments, or recommended as effective learning tools to supplement in-class activities.
In this workshop, I will walk participants through an example case of game-based language teaching by providing a step-by-step explanation of how I integrated the cooperative puzzle game Keep Talking and Nobody Explodes (Steel Crate Games, 2015) into an intermediate-level English language course aimed at developing speaking and listening skills. I will also briefly discuss how this game and others that employ similar play dynamics can facilitate language learning, with supporting empirical findings from three studies analysing the language learning opportunities afforded by this game as well as the beliefs and attitudes of learners towards digital game-based language learning more generally. Participants will also have the chance to play the game for themselves during the workshop (all the necessary devices and materials will be provided) and to discuss their impressions and their thoughts on digital game-based learning in an EFL context.
The registration desk will be open from early Saturday morning to accommodate everyone. Be ready to collect your name card and lanyard before moving quickly to the presentations and the Associate Member area in the Global Plaza.
This forum will examine how advancements in artificial intelligence and new technologies can aid or impede learner development. The first presenter will explore how Japanese university students can improve their writing skills by using AI (such as ChatGPT) with a single prompt. The students in the study wrote a series of passages with pen and paper, in class, over a number of weeks. They then corrected their spelling and grammar using a simple AI prompt before making notes of their most obvious/common errors. The second presenter will focus on how he has evolved the grading schemes of writing assignments to keep students focused on their own abilities to produce English in presentations. The third presenter will examine how guidelines relating to the use of AI can be negotiated between students and teachers to ensure that AI tools promote, rather than detract from, language learning and enable learners to foster a sense of independence and autonomy. With these guidelines, it is hoped his students will take an informed approach to using AI tools and use them effectively to reach the goals described in their learning plans. At the end of these presentations, the audience will be invited to reflect on and share their perspectives on using AI tools to support language learning with a focus on learner development.
While generative AI has been around since the 1950s (Cao et al., 2023), in the last year and a half, the release of ChatGPT 3.5 and the influx of similar programs using large language models have left many in the academic world scrambling to work out how such powerful machines and education can coexist. Whether one is excited or concerned by the prospect of AI like ChatGPT in the classroom, a question remains: when students can turn to technology for so many answers, how can educators continue to encourage creativity and critical thinking in the classroom? One rather straightforward answer is to expose students to the realities of ChatGPT and educate them in how it can be used. Rather than impairing students' ability to think critically, AI can become a tool that fosters critical thinking skills when students are given the space to discover the successes and deficiencies of ChatGPT and similar programs (Chang et al., 2021). This presentation will outline a classroom practice for English-language students at a private Japanese university. The presentation will discuss the procedure, the positives, and the challenges of a practice in which students analyzed discussion questions created by ChatGPT. Students were required to use higher-order thinking skills by considering the significance, role, and qualities of thought-provoking discussion questions and evaluating generated questions against these criteria. Beyond encouraging critical thinking, the activity highlighted its necessity when adopting new, exciting technology.
This study investigates the application of Neufast, an AI-mediated job interview platform, in the realm of English for Specific Purposes (ESP) learning. The research specifically examines students' perceptions of the AI analytics video interview for talent recruitment. Neufast, an AI-driven tool, offers a simulated environment for students to rehearse job interviews, receive immediate feedback, and enhance their performance. The AI capabilities of the platform enable the analysis of both verbal and non-verbal cues, delivering comprehensive feedback on language proficiency, communication skills, and body language. The objective of this study is to comprehend students' perceptions of this cutting-edge tool and its potential to augment ESP learning. The results indicate that the integration of AI in education, particularly in ESP learning, can offer an interactive and personalized learning experience. The study also underscores the potential of AI-powered job interview practice in equipping students for the competitive job market, thereby bridging the gap between education and employment. This research contributes to the growing body of literature on the use of AI in education and its implications for future learning. The findings have significant implications for educators, policy makers, and stakeholders in the education and recruitment sectors.
The provision of written corrective feedback on language learners' writing has been the focus of much investigation over the last three decades. Amongst this literature, there has been growing evidence that the provision of graduated feedback, which increases in explicitness as per learner needs in real time, can increase the learning potential of feedback on writing. However, the provision of such feedback has been limited to oral feedback due to its dialogical nature, with its implementation being reported as too time consuming for most real-life classrooms. This presentation introduces the ongoing development of DynaWrite (ver 1.0)—an online tool which automatically provides graduated feedback on learners’ writing, thus overcoming the aforementioned modality and time constraints. Furthermore, in addition to detecting grammatical errors, the tool also provides automated dynamic feedback on brevity, clarity, objectivity and formality as they pertain to the genre of scientific writing. The proposed web application relays the input of a user to a chatbot based on a large language model. The chatbot is configured to increase the explicitness of feedback when the user is unable to resolve errors. We first briefly describe the theoretical foundation of the tool and graduated feedback. This is followed by a description of the four levels of feedback the tool provides, with level 1 being the most implicit and level 4 being the most explicit. We then discuss the process of error categorization and the process of instructing the chatbot to reliably detect these errors. The presentation concludes with a demonstration of how the tool is used and how it can track learner progress with specific error categories.
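The abstract does not describe DynaWrite's implementation, so purely as an illustrative sketch of the graduated-feedback idea it outlines (a chatbot that escalates from implicit to explicit hints), the snippet below pairs four hypothetical level instructions with a generic LLM chat call. The OpenAI Python client is used only as a stand-in for "a chatbot based on a large language model", and the prompts, model name, and function are invented for illustration.

```python
# Illustrative sketch only: escalating feedback explicitness, loosely modelled on
# the four levels described in the abstract. Prompts and level wording are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LEVEL_PROMPTS = {
    1: "Tell the writer only that a sentence contains a problem. Do not say where or what.",
    2: "Point to the sentence that contains the problem, but do not name the error type.",
    3: "Name the error type (e.g., article use, formality) without giving the correction.",
    4: "Give the correction explicitly and briefly explain the underlying rule.",
}

def graduated_feedback(learner_text: str, level: int) -> str:
    """Request feedback at a given explicitness level (1 = most implicit)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You give feedback on scientific writing. " + LEVEL_PROMPTS[level]},
            {"role": "user", "content": learner_text},
        ],
    )
    return response.choices[0].message.content

# A tutoring loop might start at level 1 and call this again with level + 1
# whenever the learner's revision still contains the same error.
```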
This presentation explores the use of AI in evaluating speaking and pronunciation. It addresses the challenges of assessing them quickly and accurately, with a particular focus on the metrics of comprehensibility and intelligibility. Pronunciation, in particular, significantly influences comprehensibility and intelligibility, essential components for effective communication. Historically, evaluating these components has been labor-intensive, relying heavily on human raters for comprehensibility and on transcription for intelligibility assessment. Such processes are not only time-consuming but also impractical for educators with substantial workloads, thus remaining largely within the realm of academic research. In response to these challenges, this presentation introduces an innovative approach utilizing AI tools to quantify comprehensibility and intelligibility efficiently and accurately. The advent of AI technologies in language learning provides an unprecedented opportunity to streamline these assessments, making them more accessible to educators. Drawing on the authors' work on evaluating speaking performance, the methodologies used include elicited imitation approaches, AI-generated rubrics, AI-assisted transcription, and other AI-assisted tools. These tools facilitate the calculation of intelligibility scores by quantifying the percentage of words correctly pronounced as per a given standard. Additionally, comprehensibility is evaluated through AI-powered analyses of listener effort, aligning with established scales. A step-by-step workflow will be provided, equipping participants with the necessary guidance to apply these AI-based speaking assessment techniques in their own classrooms. This approach represents a significant departure from traditional, subjective evaluations of speaking performance. By leveraging AI technologies, educators can obtain objective, quantifiable metrics of speaking and pronunciation proficiency quickly, without having to rely on inaccurate holistic scoring. This presentation, building on the insights gained from previous workshops and studies, aims to inspire educators and researchers to embrace technological advancements in the pursuit of more effective language teaching methodologies for evaluating speaking performance.
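The abstract does not specify how the intelligibility percentage is computed, but as a minimal sketch of the general idea, the percentage of reference words recovered in an AI transcription of the learner's speech, the following Python function aligns a transcript against the reference passage with difflib. The tokenisation and alignment choices here are simplifications, not the presenters' method.

```python
# Sketch: word-level intelligibility as the share of reference words recovered
# in an automatic transcription of the learner's speech.
from difflib import SequenceMatcher
import re

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def intelligibility_score(reference: str, transcript: str) -> float:
    """Percentage of reference words matched, in order, by the transcript."""
    ref, hyp = tokenize(reference), tokenize(transcript)
    matcher = SequenceMatcher(a=ref, b=hyp)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 100 * matched / len(ref) if ref else 0.0

if __name__ == "__main__":
    ref = "She sells seashells by the seashore"
    hyp = "she sell seashells by seashore"   # e.g., output of an ASR tool
    print(f"Intelligibility: {intelligibility_score(ref, hyp):.1f}%")  # 66.7%
```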
In recent years, universities have been dealing with concerns about the use of artificial intelligence, especially with the emergence of ChatGPT. This technology can not only generate written content, i.e., human-like responses to questions and prompts, but also affect assessment and evaluation methods. This paper presents a pedagogical approach that focuses on integrating artificial intelligence (AI), in particular Ernie Bot, a Chinese counterpart of ChatGPT, into the assessment and evaluation process in the context of language education. The approach is designed to enhance the understanding of AI's role in learning, develop skills in designing AI-informed assessment tasks, and cultivate critical AI literacy among a group of Year 3 TESOL undergraduate students at a university located in Guangdong Province, China. Overall, this pedagogical approach enables TESOL undergraduate students to develop awareness and understanding of AI's role in learning, cultivate skills in designing AI-informed assessment tasks, and foster critical AI literacy through hands-on experiences. By integrating AI into language education assessment and evaluation, this approach prepares students to navigate the evolving landscape of technology-mediated language learning environments. This presentation also aims to contribute to the ongoing discussion around the use of AI in higher education assessment and evaluation, highlighting the need for more research and a critical evaluation of its challenges and impact.
While the AI revolution has opened up many conversations around chatbots and image generation, many teachers haven’t considered the ways that AI can be used in spreadsheets to simplify the tracking of student information, easily import content into apps like Kahoot, and, most importantly, differentiate instruction for each of their students depending on their level, skill set, and interests.
This session focuses on devilishly simple approaches that save you time and money, and turn you into a rockstar teacher while doing it (no banjos necessary!). Participants do not need to be spreadsheet savvy to take advantage of the ideas in this session; if you know how to cut and paste, you’re in for an eye-opening experience!
By the end of this session, attendees will know how to use AI to build custom spreadsheets on the fly, how to create instant quizzes (and save over $100 doing it), and how to create fully customized content for each student in your class in less time than it has traditionally taken you to make a single worksheet. And you don’t even have to sell your soul to learn it!
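To make the quiz-creation step above concrete, here is a small Python sketch that writes quiz items into a spreadsheet-style CSV laid out like a typical Kahoot import template (question, four answers, time limit, correct answer number). The column layout is an assumption based on Kahoot's published template, and the example rows are placeholders standing in for AI-generated content.

```python
# Sketch: dumping quiz items into a CSV shaped like Kahoot's spreadsheet import
# template (column order assumed; check the current template before importing).
# In the workflow described above, the question rows would come from an AI tool.
import csv

quiz_items = [
    ("Which word is a synonym of 'happy'?", "glad", "angry", "tired", "hungry", 20, 1),
    ("Choose the past tense of 'go'.", "goed", "went", "gone", "going", 20, 2),
]

with open("kahoot_quiz.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Question", "Answer 1", "Answer 2", "Answer 3", "Answer 4",
                     "Time limit (sec)", "Correct answer(s)"])
    writer.writerows(quiz_items)

print("Wrote kahoot_quiz.csv - copy the rows into Kahoot's import template.")
```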
This pilot study explores learners' perceptions of corrective feedback in digital foreign language learning tasks within a Learning Management System (LMS) as complementary tools in Spanish as a Foreign Language courses. While previous research (Li & Vuono, 2019) has extensively examined teacher corrective feedback in oral and written contexts, there is a lack of research on automated corrective feedback in digital learning environments.
A series of studies (Sheen, 2020) has suggested that corrective feedback can draw learners' attention to specific incorrect forms and structures of the target language they are acquiring, promoting error awareness and language development. However, the options for corrective feedback in H5P interactive activities within LMSs are limited in how effectively they can inform learners about the nature of their mistakes. Most default feedback forms provide explicit feedback by displaying a green check mark for correct responses and a red X for incorrect ones. To address this limitation, this study investigates how implicit feedback forms in digital language learning environments can enhance learners' error reflection and potentially develop metalinguistic awareness. Previous corrective feedback studies (Klimova & Pikhart, 2022) have not extensively examined digital educational environments or considered learners' perceptions and preferences regarding explicit and implicit automated feedback.
The current study examines the types of automated feedback provided in beginner-level Spanish as a Foreign Language digital activities at a university in Hong Kong. Questionnaires are used to collect data on learners' perceptions of automated feedback. Preliminary results indicate that students often struggle to understand simple explicit corrective feedback provided for grammar forms in H5P tasks. Therefore, there is a need for tailored feedback options in digital language learning activities. We will also introduce examples of learners' preferred reformulated digital feedback. Understanding learners' perceptions and preferences enables educators to optimize feedback approaches in digital learning platforms, ultimately improving language learning engagement and outcomes.
This presentation outlines a three-step quick-writes activity conducted over a one-year period in 2023 in an intermediate-level first-year EFL class at a private university in the Kanto region. The aim was for students to develop fluency in writing, as well as a sense of ownership of English, through learner autonomy facilitated by DeepL. The first step involved a five-minute quick-write in response to prompts, encouraging students to react to news headlines or textbook content. Using an honor system, students were asked to refrain from using dictionaries, online translation tools, or AI for this step, but could use the right-click function to correct misspellings. The second step was to copy this text into DeepL, whereby the English (left) was translated into Japanese (right) by DeepL. Students were asked to read the Japanese for coherence, then make changes to the English until they were satisfied with the Japanese translation. The third step involved a reversal of languages, with English and Japanese switched, prompting students to explore synonyms and alternative phrasing using DeepL’s suggestions to enhance their language skills. DeepL provided a number of synonyms and alternative phrasings, and students were encouraged to take ownership of and develop their own English language learning. Both the first and second drafts were submitted to Google Classroom for feedback. This study focuses on the 2024 fall semester by tracing the development of three students at the start, middle, and end of the semester. The study was piloted as a one-year project for the 2023 cohort, and its success has meant that it has been included in the 2024 syllabus for the incoming cohort.
When communicating in a second language, communicative competence, as defined by Canale and Swain (1980), is crucial. Having the appropriate vocabulary is merely one component of a whole conversation. Current research has suggested that in effective communication, the ratio of verbal to nonverbal communication is about 50-50 (Tiferes et al., 2019). Additionally, we must be cognizant of our facial expressions, which can enhance our nonverbal communication competence, particularly in online presentations and interviews (author et al., redacted, 2021). We describe the outcomes of two experiments in this study. In the first, students' online mock job interviews were recorded in both English and Japanese, and two human judges and an AI emotion analyzer assessed the students' facial expressions. In the second experiment, we analyzed students' interactions with a simulated online interview using gaze-tracking equipment (author et al., redacted, 2022). Using a participatory action research-based approach, we assess the responses provided in L1 and L2 to conventional job interview questions, reassess our procedures, and enhance our methodologies. Our research endeavors to assist students in achieving success during job interviews, irrespective of language or cultural barriers.
Over the past decade, research has provided substantial evidence of the efficacy of technology-mediated language learning. Computer-mediated communication (CMC), characterized by authenticity in the use of communicative tools and platforms, has become a key issue to be explored in various second language (L2) learning contexts. The main objective of this paper is to present an English course design via CMC for supporting communicative approaches to language teaching (e.g., task-based instruction) and learning, and for facilitating EFL spoken and written performance. Participants were 103 non-English-major students from 11 departments in 3 intact classes at a public offshore university in Taiwan. Their English proficiency levels ranged from A1 to B1 on the CEFR (female = 29%, male = 71%). These students took a required English course from the fall semester of 2022 (as freshmen) to the fall semester of 2023 (as sophomores). Each semester lasted 18 weeks with 2 periods per week (50 minutes per period). The course was aimed at developing students’ four language skills as well as TOEIC-related skills. In this course, the CMC tools and platforms used included e-campus (the university’s digital learning platform), ZUVIO (an online instant response system), e-mail, communication apps (e.g., Line), and other audio-visual tools. Students took pre- and post- listening and reading TOEIC-simulated tests in October 2022 and October 2023, respectively. The results showed no significant differences between the pre- and post-tests; however, the mean scores increased. In terms of English performance, students improved in their speaking ability, including role-playing and presentation skills in English. EFL students’ attitudes toward and interest in English learning were also enhanced. This study makes theoretical contributions and offers implications for pedagogical applications of communicative approaches to computer-assisted and technology-mediated language learning.
Key words: computer-mediated communication, EFL classroom, authenticity and meaningfulness, communicative approach
With the affordances of advanced technologies, machine translation tools (MTT), such as Google Translate, have been created and are widely used to facilitate communication between people from different language backgrounds. The last decade has also witnessed the employment of MTT by English as a Foreign Language learners to assist their studies at university. Such use of MTT is regarded as a “double-edged sword” and is thus not often encouraged. Studies have found that MTT can help students with their language use in writing. However, there is a dearth of research on the use of MTT in the writing assessments of an EAP (English for Academic Purposes) course. This study attempts to bridge this gap by examining student and teacher perspectives on using MTT for the assessments of an EAP course at a major university in Hong Kong. Both quantitative and qualitative data were gathered for this research. The quantitative data were derived from surveys of 47 students and 37 language teachers, and the qualitative information was collected via semi-structured interviews with 12 students and 10 language teachers. The analysis of the gathered data showed that, notwithstanding some commonalities, the perspectives of students and teachers are markedly different on students’ deployment of MTT in the assessments of the EAP course. The study has profound pedagogical significance with respect to assessment design.
In an attempt to explore the linguistic features in the headlines of fake news, this study selected a sample of 383 fake news articles (Clarke et al., 2021) covered by the SEC’s enforcement action issued on April 10, 2017 (U.S. Securities and Exchange Commission, 2017). Through building a corpus of the headlines of fictitious news stories, this study was able to conduct both lexical and pragmatic analyses. At the lexical level, the study found a prevalent use of medical or pharmaceutical terminology in the corpus. At the pragmatic level, guided by relevance theory, the study explored the distribution of two grammatical categories, definite referring expressions and superlatives and intensifiers, to investigate whether these parts of speech contribute to an information gap that enhances readers’ expectations. Employing the tool Wmatrix, the study found that definite referring expressions and superlatives and intensifiers accounted for a relatively small proportion of the corpus. However, some strategies used in clickbait headlines to increase the relevance between the audience and news titles were observed in certain fake news headlines examined in this study. It is hoped that the findings will not only shed some light on the detection of false information but also direct the attention of news consumers toward implicit or underlying linguistic patterns present in untruthful journalistic language embedded within headlines.
Generative Artificial Intelligence (GenAI) tools, such as ChatGPT, are increasingly pervasive in society and a helpful aid for education, but they also have problems. Although AI tools may help students gather information and generate ideas, there is a concern that GenAI may deprive them of opportunities to research widely and think critically on their own. This study discusses students' perceptions of using GenAI tools in assignments, based on the results of a questionnaire administered in an academic English course at an online university. Twelve participants, three females and nine males from different regions, completed the online questionnaire. The results show that about half of them answered that they should not use ChatGPT in writing assignments, while the rest answered that they could use it when necessary. About 75 percent responded that writing with ChatGPT is as unacceptable as plagiarism and that the teacher should detect students' improper use of ChatGPT. The free comment section reveals that many are concerned that repeated use of ChatGPT may lead to habitual use and failure in the course. Notably, while teachers often view the issue of AI misuse as a problem of academic ethics, we found that many students were concerned that the regular use of such tools would prevent them from acquiring knowledge, improving their writing skills, and seeing the results of their own efforts. As the number of participants was small and the questionnaire was open-ended, relatively diligent students may have answered it, while those inclined to misuse GenAI may not have. Therefore, the results cannot be generalized. However, this case study offers some hints for addressing the disadvantageous aspects of AI in education, not only from a technical but also a cognitive point of view.
The rise of digital technology in educational settings from its formative years in the 1980s to its current iteration has seemed inexorable and, in recent months, exponential. Natural Language Processing and Large Language Models are now ubiquitous topics of debate when discussing ethical and moral concerns in education. This presentation will begin by examining the ethical and moral issues associated with the use of generative AI in academic writing. It will address concerns related to academic integrity, the potential for AI to facilitate plagiarism, and the cognitive impact on students' critical thinking and learning processes. Subsequently, it asks two key questions: firstly, is it possible to create 'AI-proof' writing assignments, and secondly, is it ultimately more far-sighted to focus on higher order thinking skills and a scaffolded approach when dealing with generative AI? Lastly, the presentation considers the situation in Japan, where rote-learning and memorization are key concepts in learning. The recent decision by Japan’s minister of education, culture, sports, science, and technology to permit content from any source to be used for “information analysis” means that Japan is an outlier in its approach to fair use and associated copyright and tort law issues. If all input is considered fair game for training generative models, irrespective of content and method, it is prudent to consider whether all output will also be similarly considered acceptable. From this presentation, it is hoped that educators will gain a comprehensive understanding of the ethical considerations surrounding the use of generative AI in academic writing as well as practical strategies to promote academic integrity and prevent misconduct in student assignments.
While artificial intelligence (AI) technologies have started being integrated into language education programmes offered at universities, secondary schools, and elementary schools, we know little about whether and how they could be introduced to language learners at the pre-school level. Just as with other types of information technology, the application of AI in early childhood education programmes raises many questions among doctors, educators, and parents. This presentation will focus on the early stages of designing an AI-based project aimed at early bilinguals and multilinguals aged 4 to 7.
For this research and development project, a small group of experts in language learning, early childhood education and AI technologies conducted a pilot study in which they surveyed and interviewed educators working on speech development of emerging bi- and multilinguals in state-funded and private preschools. Participants of the study were asked to assess their attitude and skills related to AI, as well as to reflect on a newly designed AI-based programme for speech development in a multilingual and multicultural context.
The presentation will discuss preliminary results that speak for the necessity 1) to carefully align speech development curriculum of the native language with the curriculum for developing a second (third) language, 2) to apply CLIL technologies by integrating the study of the world (i.e. topics related to geography, history, and science) with speech development classes, 3) to design AI-based language activities that would develop children’s intercultural competences, creativity and critical thinking skills, and 4) to counteract negative attitudes towards AI among educators and parents.
The discussion of factors that support or hinder designing the content and form of the AI-based programme for bi- and multilinguals will be of interest to early childhood educators, language teachers working with young learners and those involved in developing technology-based educational applications for children.
The increasing prevalence of Artificial Intelligence (AI) technology has constituted a new reality in language learners’ lives, presenting unprecedented opportunities and challenges in language education (Kohnke et al., 2023). In the field of L2 writing, while more studies (Barrot, 2023) have begun to examine the use of AI tools among second-language writers and its impact on learners’ writing competencies, little is still known about individual learners’ decision-making processes regarding the use of AI technology in completing writing assignments.
Conducted at a Sino-foreign cooperative university in China, this study investigates the social and contextual conditions that influence individual learners’ use of AI in EAP (English for Academic Purposes) courses. Drawing from Darvin and Norton’s (2015, 2023) investment model, the study documents three first-year Chinese students’ English academic writing practices over two academic terms. Data from semi-structured interviews, writing process recordings, assignments, reflective writing journals, course materials, and university policies on AI usage were analyzed using thematic analysis.
The findings reveal that learners’ engagement with AI tools in academic writing is socially mediated, and influenced by their identities, perceived resources, and ideologies about AI use. In particular, despite the course policy prohibiting AI use in all assignments, three participants selectively used AI tools at different stages of their academic writing. Their use of AI tools was affected by how they positioned themselves (e.g., being the owner of one’s own work), identified accessible and useful resources (e.g., the writing center, peer reviewers), and navigated ideologies (e.g., competing ideologies from different course instructors) about AI use in their learning environment.
This research underscores the complexities surrounding AI use in academic writing and emphasizes the need for language instructors to recognize learners’ identity needs, access to resources, and existing competing ideologies about this contentious technology when developing policies and regulations for AI use in courses.
In the context of the global digital revolution and its impact on education, this research delves into the perspectives of teachers regarding the integration of Computer-Assisted Language Learning (CALL) in the rural primary school settings of Nepal. The study aims to understand how English language teachers perceive the role of technology in language instruction and to identify the challenges and opportunities associated with the implementation of CALL in resource-constrained, rural environments. Employing a qualitative research design, this investigation gathers data through in-depth interviews and focus group discussions with primary school teachers actively engaged in using computer-assisted language learning tools. The participants are selected from diverse rural regions of Nepal, providing a nuanced understanding of the varied contexts in which these technologies are introduced.
Preliminary analysis reveals multifaceted insights into teachers' attitudes, beliefs, and experiences with CALL. Themes emerging from the data include perceived benefits such as increased student engagement, enhanced language proficiency, and exposure to a broader range of learning resources. However, challenges such as limited access to technology, lack of training, and concerns about cultural appropriateness are also identified. The findings of this study aim to contribute valuable perspectives to educational policymakers, school administrators, and teacher training programs, aiding in the development of strategies to effectively integrate computer-assisted language learning in rural Nepal. By comprehensively understanding teachers' viewpoints, the research endeavors to inform future initiatives that harness the potential of technology to bridge educational gaps and empower students in remote, resource-limited settings.
Research indicates that peer feedback accelerates L2 development in the same way as feedback provided by teachers, by triggering noticing of gaps in learners’ interlanguage (Sippel & Jackson, 2015). Computer-mediated L2 collaborative writing (CW) provides learners with opportunities to engage in collaborative dialogue in which they give and receive peer feedback. Peer feedback thus serves as a mediational tool for learners to co-construct knowledge (Storch, 2017) and develop their writing skills (Storch, 2005). However, studies (e.g., Hsu, 2019; Rouhshad & Storch, 2016) have shown that learners may not always heed peer feedback in computer-mediated contexts, which may affect their learning. Therefore, it is necessary to identify a technique that encourages L2 learners to process peer feedback so that they can benefit from the knowledge co-construction process afforded by computer-mediated CW. This study explores how written languaging influences computer-mediated L2 CW, focusing particularly on the extent to which it engages learners in processing peer feedback and its effect on the improvement of individual L2 writing.
Participants were 52 EFL learners in two writing classes at a Taiwanese university. The two classes were randomly assigned to the written languaging (WL) and no written languaging (NWL) groups. Both groups completed an individual pretest and posttest expository writing task, along with two computer-mediated expository CW tasks using Google Docs across seven weeks. The WL group engaged in written languaging after the CW tasks, whereas the NWL group did not. Learners' written production on the pretest and posttest was analyzed for the quality of content and organization, and for linguistic complexity and accuracy. Learners’ written languaging was coded for written languaging episodes and analyzed for levels of feedback processing. The findings aim to shed light on how written languaging affects peer feedback processing and individual L2 writing development in computer-mediated CW contexts.
The rapid advancement of Artificial Intelligence (AI) has opened new horizons in education, potentially transforming the traditional classroom into a dynamic and adaptive learning environment. This presentation explores the findings of a comprehensive global survey, including educators from diverse teaching backgrounds, teaching experiences, and geographical locations. Our research sought to understand teachers' perceptions of AI in education, focusing on two primary areas: teachers' views on students' use of AI in the classroom and teachers' utilisation of AI for creating class materials.
The first section of our study explores teachers' perspectives on students' engagement with AI tools and applications. Through a selection of multiple-choice, Likert-scale, and open-ended questions, we examined teachers' familiarity with AI in education, their experiences with AI integration, and their perceptions of the impact of AI on student learning and engagement. We also delved into teachers' opinions on the benefits and challenges associated with students' use of AI, shedding light on their concerns regarding ethical considerations and the potential for overreliance on AI.
In the second section of our questionnaire, we investigated how teachers employ AI to create teaching materials. Our survey inquired about their use of AI-based tools, their experiences with AI-generated content, and the perceived effects of AI on the quality and efficiency of teaching material development. We also probed into teachers' views on the advantages and challenges of incorporating AI into creating educational resources and their willingness to receive further training.
Our research reveals diverse insights from teachers worldwide, offering a nuanced understanding of their perspectives on AI in education. The findings provide valuable guidance for educational policymakers, technology developers, and fellow educators aiming to harness the potential of AI to enhance teaching and learning. We present our results to foster informed discussions and collaborative efforts in the dynamic field of AI in education, ultimately striving to create more inclusive, effective, and personalised learning experiences for students worldwide.
Using teacher-led questions is of utmost importance in the educational setting, as it allows teachers to convey knowledge and ascertain students' comprehension levels effectively. Educational professionals utilise these tools to facilitate purposeful interactions, foster increased student engagement, and shape cognitive processes, logical reasoning, and active involvement among their pupils. Remarkably, the scholarly investigation of the influence of student inquiry in Content and Language Integrated Learning (CLIL) classrooms within Asian English as a Foreign Language (EFL) secondary education remains limited. Consequently, our research examined the frequency and kinds of questions CLIL practitioners employ in Taiwanese educational settings. We believed such an investigation would enhance our comprehension of the dynamics between teachers and students, inform our instructional strategies, facilitate student learning, and promote content acquisition. A dataset comprising 1,405 minutes of recorded material from 39 CLIL classes was analysed. The findings indicate that practitioners employ display questions, referential questions, and confirmation checks in CLIL classes. These practices raise concerns regarding the attainment of desired pedagogical objectives and may hinder learners’ cognitive advancement. The questions employed in the classroom are strongly influenced by factors such as the gender of the teacher, the level of the learners, and the duration of the course. The research has significant pedagogical implications, highlighting the need for CLIL teachers to recognise the value of utilising generative AI (artificial intelligence) tools in promoting effective interactive discourse within CLIL classrooms. Generative AI, like ChatGPT, can be very beneficial in helping teachers select relevant prompts, create diverse questions that encourage students to think deeply, and elicit varied responses to develop various language skills. We argue for the necessity of offering appropriate professional development opportunities for CLIL practitioners to enhance their questioning skills and to facilitate interaction, participation, and learning through the deployment of generative AI tools.
Digital game-based language learning is a growing field of study which has begun to highlight the potential of digital games as language learning tools. Much of the literature in the field has addressed the potential benefit for second language acquisition of engagement with digital games outside the classroom. However, when digital games are used in the controlled environment of a foreign language classroom, the enormous variety of commercially produced digital games makes designing objectively effective digital gaming curricula more challenging. Furthermore, much of the relevant literature has failed to address whether variables such as a player’s target language ability, perception of digital gaming, and personal gaming habits influence student performance in courses using digital games as a method of instruction. One way of approaching how to use digital games effectively as a language learning tool in the classroom is through multivariate data analysis, such as exploratory factor analysis (EFA), to identify latent factors correlated with variables such as performance on tests, gaming habits, and attitudes towards a digital gaming curriculum. This study attempts to address this issue by showing how EFA can help educators interested in using digital games not only to better understand classroom data but also to design better curricula. The study itself was conducted over a three-year period in university-level EFL courses, and the curriculum was based on the smartphone edition of the single-player role-playing game Life Is Strange (Dontnod Entertainment, 2015). The study identified three latent factors correlated with the variables of vocabulary acquisition from the game, comprehension of the in-game narrative content, gaming habits, and perceptions of smartphone games as language learning tools. This presentation will demonstrate how these results may help educators design more effective lessons that use digital games as an instructional tool.
This study explores English learners’ reactions to and achievement in two types of online learning, pre-recorded lectures and real-time lectures, in an English reading class over one semester. The participants were 60 Korean EFL university learners who attended an English reading class as a general elective subject. They participated in the pre-recorded class for six weeks and in the real-time class, conducted through the Zoom application, for six weeks. A mid-term and a final exam were administered to assess the learners’ achievement, and an online survey was conducted to examine their reactions. The results showed that the participants preferred studying with recorded lectures rather than real-time ones, and that they were most satisfied with grammar study in the recorded lectures and with reading comprehension in the real-time ones. The results also showed that they attained better scores on top-down reading skills and vocabulary learning with recorded lessons, and higher scores on bottom-up reading skills and grammar learning with real-time lectures. This session suggests effective approaches to online learning for an English reading course.
This session introduces a new 4-level, 4-skills EFL course that develops students’ critical thinking skills while systematically improving their knowledge of essential high-frequency words with both in-text and online learning tools. Utilizing two keyword lists from the New General Service List Project (www.newgeneralservicelist.com), Levels 1, 2, and 3 help students master essential general English vocabulary in the NGSL (New General Service List), while Level 4 focuses on teaching academic vocabulary in the NAWL (New Academic Word List). Each word list in the NGSL Project is corpus-based and pedagogically driven, providing the highest coverage in the world for that genre for second-language learners of English. The In Focus series of student books utilizes various techniques to develop students’ critical thinking skills, including exposure to multiple points of view, specific skill development activities, and the use of authentic “political cartoons.” Participants will get a chance to see and use activities from the book as well as the many and varied online learning tools we have developed.
When technology is integrated into educational environments, chatbots, as intelligent technological tools for learning assistance, garner widespread attention. However, despite the potential of chatbots in educational settings, there is relatively limited research on their application in guiding writing activities. Although current iterations of generative AI (GenAI) chatbots, such as ChatGPT, provide feedback on writing, how such feedback facilitates EFL writing is still under-explored. This project aims to explore the efficacy of integrating ChatGPT into EFL writing and examine how EFL learners benefit from GenAI feedback.
Thirty EFL college learners from a fundamental English composition class took part in the study. The learners engaged in two in-class essay writing tasks, producing two initial essay drafts (one cause-effect and one comparison). The learners then (1) revised the drafts using GenAI (ChatGPT) feedback generated according to instructions given by the researcher, and (2) wrote a learning log reflecting on how they revised the drafts based on the AI-generated feedback. The initial and revised drafts were compared in terms of the development of topic sentences, the relevance of supporting ideas, the use of academic language, and idea transitions. The learning logs were coded for reflection episodes on topic sentences, supporting ideas, language usage, and idea transitions.
The initial analysis of the learning logs indicated that the learners were able to identify the majority of their writing problems through the GenAI (ChatGPT)-generated feedback, particularly language problems. The revised drafts were also better in language quality than the initial ones. The results thus suggest the potential of integrating GenAI chatbots to guide and facilitate English writing in an EFL learning environment.
This presentation introduces seven open-source, corpus-derived, high-frequency vocabulary word lists, known collectively as the New General Service List Project, which were developed by the presenter to help meet the needs of ESL/EFL learners. It will then introduce a large and growing number of free, pedagogically driven online tools, apps, and resources for utilizing these lists for teaching, learning, assessment, materials creation, and research. The tools include gamified flashcards, an NGSL-specific Wordle game, a video-concordancing study tool, new online placement and achievement tests, text creation tools, and a brand-new AI-powered text profiling tool which helps teachers generate and then simplify texts to the level of their students.
With an eye towards efficiency, each word list was made to be as short as possible (between 600 and 2,800 words) while still offering extremely high coverage (between 92% and 99%) of the language in its genre. The project includes lists for general daily English (New General Service List), spoken English (New General Service List-Spoken), academic English (New Academic Word List), business English (Business Service List), TOEIC English (TOEIC Service List), children’s English (New Dolch List), and fitness English (Fitness English List). Most lists were developed in a modular approach so they can be efficiently mixed and matched to meet a broad range of academic needs. Time permitting, the presentation will also briefly review the growing amount of refereed NGSL research which, in addition to confirming high coverage for the intended genre of English, also finds excellent coverage for high-stakes exams such as the Japanese high school entrance exam, the Japanese National Center Exam, and the TOEIC exam.
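As an illustration of how the coverage figures above are calculated, the Python sketch below computes the percentage of a text's running words that appear in a word list. The tiny inline set stands in for the actual NGSL, which in practice would be loaded from the project's published files, and real coverage counts usually match word families or lemmas rather than raw tokens.

```python
# Sketch: lexical coverage of a text against a word list, i.e., the percentage
# of running words (tokens) that appear in the list. The three-sentence-worth
# of words below is a placeholder; load the full NGSL file for real use.
import re

def coverage(text: str, word_list: set[str]) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    covered = sum(1 for t in tokens if t in word_list)
    return 100 * covered / len(tokens) if tokens else 0.0

ngsl_sample = {"the", "cat", "sat", "on", "a", "mat"}          # placeholder list
print(f"{coverage('The cat sat on the mat today.', ngsl_sample):.1f}% coverage")  # 85.7%
```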
The present study aims to explore the influence of Virtual Reality (VR) coupled with Artificial Intelligence (AI) feedback on the enhancement of English debate performances among English as a Foreign Language (EFL) students. Scheduled between January and May 2024, the research initiative seeks to address the prevalent issue of performance anxiety during English debates, a significant hurdle for many EFL learners. The integration of VR technology with AI-driven feedback promises an authentic and immersive debate practice environment, potentially reducing anxiety and bolstering debate capabilities.
The investigation will adopt a quasi-experimental design and engage approximately 300 first-year university students over a 12-week period. Participants will be divided equally into two cohorts: one receiving state-of-the-art VR and AI-assisted debate training, and the other subjected to traditional debate training methodologies. The study's multifaceted data collection strategy will comprise pre- and post-intervention assessments to measure improvements in debate performance and changes in anxiety levels, supplemented by classroom observations and focus group interviews for qualitative insight.
Anticipated results from this study include a comprehensive evaluation of VR and AI's efficacy in improving English debate proficiency and in the reduction of anxiety among EFL students. These insights are expected to significantly inform the development of debate curricula, the enhancement of teacher training programs, and the establishment of an advanced online debate course that incorporates VR and AI technologies. The study's implications extend beyond immediate pedagogical applications, contributing to the burgeoning research on educational technology in language education. By providing empirical data on the effectiveness of VR and AI in debate training, the study aims to revolutionize EFL debate instruction and establish a new benchmark for interactive, technology-driven language learning environments.
Virtual Reality (VR) has captured the attention of language educators due to its unique features, such as immersion and a sense of presence, promising enhanced learning experiences. However, existing research on language learning with VR predominantly consists of short-term studies focusing on specific linguistic facets, such as vocabulary, typically with a small number of students. While these studies provide evidence affirming the capability of VR to improve language skills, more robust longitudinal studies involving larger student populations are needed to gain clearer insights into the effectiveness of VR in augmenting learning outcomes. To fill this gap, an eight-week quasi-experimental study was designed targeting Japanese students from multiple universities to compare Zoom, a mainstream video conferencing tool, with a browser-based VR platform called Frame in the context of learning small talk skills. Following the flipped classroom approach, the students in both groups (N Zoom = 31, N VR = 26) first individually learned about small talk skills by watching two videos from an online course before every lesson. Using breakout rooms for the Zoom group and private voice zones for the VR group, they discussed the content of the videos in groups and practiced small talk in pairs during the one-hour weekly lessons. Learning outcomes were assessed at three intervals using a pre-test, mid-test, and post-test design for the Zoom group, and the same process is currently underway for the VR group. Significant improvement was observed in the Zoom group's performance from the mid-test to the post-test. This presentation delves into analyzing and discussing the learning outcomes of the VR group, currently under analysis, while comparing them against the baseline performance of the Zoom group. The findings hold significant implications for language educators keen on integrating VR into their teaching practices, offering insights into its potential benefits and challenges.
This study explores the development of an immersive virtual reality (VR) environment designed for independent, implicit learning of English conditional forms — specifically, the first, second, and third conditionals. A group of 20 participants experienced the VR system, and their comprehension of the relationship between conditional forms and verb tense was assessed through pre-tests, post-tests, and delayed tests. The results revealed that, among the three forms, the experience was most effective in enhancing users' understanding of the construction of the second conditional, although the findings did not reach statistical significance. Moreover, there was only a marginal disparity in average scores between post-tests and delayed tests, suggesting the sustained retention of learned content.
In this presentation, we will introduce the developed VR system, outline the experimental design, and present the results obtained. Additionally, we will provide insights and recommendations for future research endeavours in this domain.
This study sought to inform the notion of “task” in Task-Based Language Teaching and Task-Based Language Assessment by describing, comparing, and contrasting the specific language used in different task types through a methodology drawn from corpus linguistics. Previous studies of tasks in TBLT have used the Complexity, Accuracy, and Fluency (CAF) framework to analyze tasks. While this framework has been helpful in uncovering the relationship between various factors in the performance of a task, it has been criticized by Crawford and Zhang (2021) for not providing much information on the specific language used during these tasks; they propose using a register analysis (Biber et al., 1999) methodology to break down the specific linguistic structures and functions associated with a particular task. This paper reports such a study of description, negotiation, and narration tasks in the ACTFL test, as found in the NICT-JLE Corpus. It was found that each task type has characteristic grammatical structures and that these structures were directly connected to the communicative goal of the task. For example, description tasks contain more prepositional phrases (e.g., in the box) than narration tasks, since prepositional phrases are needed for the communicative goal of describing scenes and situations. In contrast, negotiation tasks contain more verb phrases with personal pronouns (e.g., I would like, I want to). Such linguistic descriptions of tasks can help educators investigate, analyze, and evaluate student performance on tasks and give insights into task design and implementation. Such analysis, for example, could give insight into the type of language needed to complete a task or the kind of language students are likely to use in order to complete a task.
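As a rough sketch of the kind of counts such a register analysis relies on, prepositional phrases versus verb phrases with a first-person pronoun subject, the snippet below uses spaCy's part-of-speech and dependency labels and normalises the counts per 100 tokens. It illustrates the general technique only and is not the procedure the authors applied to the NICT-JLE Corpus.

```python
# Sketch of the counting step in a register analysis: prepositional phrases vs.
# verb phrases with a first-person pronoun subject, per transcript.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def register_counts(transcript: str) -> dict[str, float]:
    doc = nlp(transcript)
    preps = sum(1 for tok in doc if tok.dep_ == "prep")          # heads of PPs like "in the box"
    first_person_vps = sum(
        1 for tok in doc
        if tok.dep_ == "nsubj"
        and tok.text.lower() in {"i", "we"}
        and tok.head.pos_ in {"VERB", "AUX"}                      # e.g., "I would like", "I want to"
    )
    per_100 = 100 / max(len(doc), 1)                              # normalise for transcript length
    return {"prep_phrases_per_100": preps * per_100,
            "first_person_vps_per_100": first_person_vps * per_100}

print(register_counts("There is a ball in the box. I would like to buy the red one."))
```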
In the era of AI tools like LLMs and Text-to-Image generators, creating personalized curricula has become easier than ever. This presentation is your guide to blending expertise in course development, curriculum design, and graphic design principles with practical strategies using ChatGPT and Adobe Firefly. Concepts covered are applicable to other LLMs or Text-to-Image generators as well. Explore how to tailor curricula to suit students' language learning needs using a powerful AI toolset. Experience a thoughtful exploration of the practical side of integrating AI in education for refined curriculum customization.
Since the launch of ChatGPT by OpenAI in November 2022, it has emerged as a pivotal point of discourse worldwide, particularly within the educational sphere. While the initial fascination about the use of ChatGPT in educational settings seems to have somewhat diminished, its numerous facets are yet to be fully unveiled and understood. One area which requires further exploration is the way language learners can use ChatGPT as a tool for self-learning and how teachers can adequately guide them. Although the benefits associated with using ChatGPT for language learning purposes have been noted by various scholars (e.g., Kohnke et al., 2023), it is unclear whether learners themselves are aware of them and are able to use them on their own. The current study therefore aimed to reveal learners’ perspectives and experiences regarding the utilisation of ChatGPT for self-study. As language teachers hold a central role in guiding learners in how to use such tools effectively, the second part of the study investigated their perceptions of ChatGPT and their methods of learning about the tool. The study involved two primary data collection procedures: Firstly, to gain learners’ insights, a questionnaire was distributed to students in two intact English classes at a private Japanese university who were offered a short guidance session on how to use ChatGPT on their own to practise English. Secondly, a separate questionnaire tailored for language teachers was administered, accompanied by interviews to obtain a more comprehensive understanding of teachers’ needs and thoughts about ChatGPT. Through the study, the current challenges for the use of ChatGPT as a tool for self-learning are identified, and implications for how to better guide learners in using the free online tool are suggested.
References:
Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. RELC Journal, 1–14. https://doi.org/10.1177/00336882231162868
This presentation will delve into two tools that help create interactive activities, with a primary focus on formative assessment: the H5P core module for Moodle and the LUMI application (a separate H5P application whose content can be used in Moodle and WordPress). The speaker has personally utilized these tools for a period of two years and will openly discuss his experiences, highlighting both the successes and challenges encountered while implementing various activities. By sharing firsthand knowledge, the speaker aims to provide you with a better understanding of these platforms—what they excel at and where they may have limitations.
During the presentation, the speaker will particularly emphasize speaking activities. Although setting them up can be a bit tricky, as they are not always predictable, the speaker will guide you through the ups and downs of the process, offering practical tips to help you navigate it more effectively.
By the end of the session, you will have gained valuable insights that can enhance your familiarity with the H5P module and LUMI Application. Understanding the strengths and weaknesses of these tools will empower you to make more informed decisions and leverage their potential to create engaging and interactive activities, especially for formative assessment purposes.
The age of AI is finally with us, and it brings with it far-reaching implications for language teachers and learners around the world. But how do we utilize the affordances of Large Language Models (LLMs) and other AI-powered technologies for the best learning outcomes?
While the technology may have changed, in many respects the pedagogy remains the same. Students still need exposure to comprehensible written and spoken forms of the target language, and they still need to be given the chance to produce written and spoken forms of the language and receive feedback on their language use. It's still a challenge for students to find the opportunity to use the language productively, and it's still a challenge for teachers to give personalized feedback to every student in large speaking and writing classes. In the context of English language learning in Japan, many students still fall way below the recommended 2500+ hours of engagement with English by the time they graduate from college.
Eigo.AI is the world's first fully AI-powered 4-skills platform for English language learners (that we know of!). It offers a library of lessons on a wide range of topics, from historical events and figures to important scientific discoveries, and much more. GPT-4 is leveraged to provide quality materials at a range of levels, from beginner to advanced. All materials are, of course, also proofread and checked by humans. Students can benefit from AI-powered feedback on both their speaking and writing skills, and everything can be easily tracked and monitored by teachers. For teachers looking to give their students that all-important extra exposure to English, and solve some of those age-old pedagogical problems with the latest advances in AI, why not try out Eigo.AI in some of your classes next semester? Act quickly before Universal Translation devices make language learning obsolete!
The sudden pivot to teaching in the era of Artificial Intelligence (AI) has necessitated significant changes to the way instructors approach teaching. As AI tools flood the educational sphere, classroom instruction must accommodate the availability of these many and various tools. In some cases, AI tools are embraced as innovative means of gamification, interactivity, and efficiency; in other cases, the tools are viewed as a violation of academic integrity and strictly banned from use with severe consequences. Amidst the ongoing debate on AI’s potential use and misuse, the need for innovative ways to meaningfully integrate and harness the existing advantages of AI tools is paramount.
This presentation shares the experience of adapting existing curriculum to integrate the use of AI powered programs such as ChatGPT and Grammarly from the perspective of English language teaching faculty at a large, urban research university in the United States. In particular, it focuses on the process of modifying standards and requirements of specific assignments to ensure AI-enhanced activities can indeed contribute to the learning process. It will also discuss strategies to determine the degree of use and appropriateness of AI-tools in writing and oral skills courses and offer suggestions for developing clear protocols that inform teaching practices and increase student accountability.
The presentation will include a short introduction on common AI tools used among undergraduate and graduate students elaborating on their educational values as well as drawbacks. Attendees can expect to take away tips and best practices on how to adapt existing materials and practices to support AI-enhanced classroom instruction. The session targets professionals teaching and working in a university context seeking to address the complex challenges of the AI-driven era in education.
Corpus technology has been noted as a facilitative tool in data-driven learning (DDL) due to its advanced search capabilities and access to vast linguistic databases. Despite its potential, many teachers shy away from direct corpus use, which involves teacher- and learner-corpus interaction. Based on teacher and learner feedback, this is due to the overwhelming amount of corpus data and unfamiliarity with corpus query functions and techniques. This 30-minute practice-based presentation aims to bridge the gap between teachers and the evolving landscape of corpus technology in the age of artificial intelligence. First, I will provide a brief introduction to the different types of corpora available to teachers as an open resource. Next, I will discuss some corpus basics using the Corpus of Contemporary American English (COCA) and Laurence Anthony’s AntConc software. Then, I will demonstrate three simple ways to incorporate corpus technology in the classroom. Finally, the presentation concludes with a set of materials for a corpus-informed lesson following the Illustration-Induction-Interaction (I-I-I) model.
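COCA and AntConc are used through their own web and desktop interfaces, so no programming is required; purely to illustrate the kind of query these tools perform, here is a bare-bones keyword-in-context (KWIC) concordance in Python. The sample text is invented, and this is not how either tool is implemented.

# A minimal KWIC concordancer, shown only to illustrate what concordance
# queries in COCA or AntConc do; it is not part of either tool.
def kwic(text, keyword, window=5):
    tokens = text.split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok.strip('.,!?;:"').lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>35} | {tok} | {right}")
    return lines

sample = "The results depend on the context. Context matters because context shapes meaning."
for line in kwic(sample, "context"):
    print(line)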
To address the long-standing challenge facing traditional automated writing evaluation (AWE) systems in assessing higher-order thinking, this study built an AWE system for scoring argumentative essays by fine-tuning the GPT-3.5 Large Language Model and compared the system's effectiveness with that of the non-fine-tuned GPT-3.5 and GPT-4 base models, or "vanilla" models, using zero-shot prompting methods. The dataset used was the TOEFL Public Writing Dataset provided by Educational Testing Service, containing 480 argumentative essays with ground truth scores under two essay prompts. Three fine-tuned models were generated: two fine-tuned exclusively on one of the two prompts and one fine-tuned on both. All fine-tuned and base models were used to score the remaining essays after fine-tuning, and their scoring effectiveness was compared with ground truth scores as the benchmark. The impact of the variety of fine-tuning prompts and the robustness of fine-tuned models were also explored. Results showed 100% consistency of all models across two scoring sessions. More importantly, the fine-tuned models significantly outperformed the base models in accuracy and reliability. The best-performing model, fine-tuned on prompt 1, showed an RMSE of 0.57, a percentage agreement (score discrepancy ≤ 0.5) of 84.72%, and a QWK of 0.78. Further, the model fine-tuned on both prompts did not exhibit enhanced performance, and the two models fine-tuned on one prompt remained robust when scoring essays from the alternative prompt. These results suggest 1) task-specific fine-tuning for AWE is beneficial; 2) fine-tuning does not require a large variety of essay prompts; and 3) fine-tuned models are robust to unseen essays.
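For readers unfamiliar with this workflow, the sketch below shows how a GPT-3.5 fine-tuning job can be created with the OpenAI Python SDK and how the reported metrics (RMSE, percentage agreement, and QWK) can be computed. The training file name, the API key, and the toy score lists are placeholders; this is not the study's code or data.

# Sketch under stated assumptions: "train.jsonl" and the score lists are
# hypothetical placeholders, not the TOEFL Public Writing Dataset.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from openai import OpenAI

def launch_finetune(jsonl_path="train.jsonl"):
    """Create a GPT-3.5 fine-tuning job from chat-formatted examples."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    training_file = client.files.create(file=open(jsonl_path, "rb"), purpose="fine-tune")
    return client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")

def evaluate(ground_truth, predicted, tolerance=0.5):
    """RMSE, percentage agreement (|difference| <= tolerance), and quadratic weighted kappa."""
    gt = np.asarray(ground_truth, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    rmse = float(np.sqrt(np.mean((gt - pred) ** 2)))
    agreement = float(np.mean(np.abs(gt - pred) <= tolerance))
    # Kappa expects integer labels, so half-point essay scores are doubled first.
    qwk = cohen_kappa_score((gt * 2).astype(int), (pred * 2).astype(int), weights="quadratic")
    return {"RMSE": rmse, "agreement": agreement, "QWK": qwk}

print(evaluate([3.0, 4.0, 2.5, 5.0], [3.5, 4.0, 2.5, 4.5]))  # toy scores only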
In Europe, Content and Language Integrated Learning (CLIL) is conducted by non-native English-speaking content specialists who teach their academic discipline in English. This ‘hard’ version of CLIL has been considered difficult to implement at Japanese universities as many content specialists lack the pedagogical skills and teacher training. One way to bridge these difficulties, and a notion that is underrepresented within the Japanese university context, is to create a collaboration between the content teacher and language specialist, which includes team teaching, course planning and materials creation.
This presentation aims to give an overview of a collaboration between a language and science professor who formulated a partnership to teach a CLIL course for Engineering and Science majors at a Japanese national university.
The course was delivered in a hybrid format, with the language teacher physically present in the classroom and the content teacher participating via Zoom and integrating web-based content into the course design. In creating the course syllabus, the language teacher collaborated closely with the engineering professors from Tohoku University who provided a list of lesson topics along with suggested videos for instructional use. These videos were then used as the starting point to create lessons by the language teacher with advice and input from a science professor at Nagoya University who was also invited to team teach the course.
The classes incorporated authentic tutorial videos, online simulations, and original materials, including whiteboard explainer videos and AI-generated imagery for online handouts. This talk aims to provide insights into the process of material creation and delivery within a Computer-Assisted Language Learning environment and how the collaborative efforts of the language and content specialists, each with predefined roles, contributed to the successful teaching of the CLIL course.
Issue: It is now relatively easy to produce speech drafts in English by using AI translation (machine translation and translation by generative AI). Nevertheless, such speech drafts would not make much sense if students do not understand them well. Students may be able to ask questions to generative AI if there is anything unclear about the drafts. However, generative AI remains silent unless they raise questions, and most learners need encouragement from their teachers to actively raise questions. This is where the presenter suggests that a division of labor between AI and teachers is effective.
Activity: The presenter let college students whose English proficiency levels are CEFR A1-A2 prepare speech drafts for their short presentations in English by using DeepL (free version). They were required to raise questions about the English produced by DeepL to enhance their comprehension, and the presenter answered those questions. The students gave presentations in class without looking at their speech drafts.
Research: The presenter categorized the students' questions. The presenter also checked if the questions could have been appropriately answered by ChatGPT (3.5).
Results: The presentations were successful overall despite the limited English proficiency of the students. It was found that they had relatively basic questions, many of which could be solved by ChatGPT. It was also found that some questions were not appropriately answered by ChatGPT.
Implications: An important role of English teachers in the era of AI is to provide opportunities for learners to actively examine and digest the output of AI translation. It will help them be independent learners of English. Teachers can also take questions that are not solved by generative AI.
How do we handle Large Language Model (LLM) AI in the classroom? Many approach it by creating policies that either restrict, ban, or limit its use to specific AI-focused activities. This can be effective in the short term; however, many underestimate the effect this technology will have on language learners’ futures as global digital citizens. Additionally, if AI is approached through limited implementations, like single activities, this can lead to an inflated sense of what the technology can do while underselling its limitations. There is a need to dedicate extended class time to covering AI so that students understand the real affordances and pitfalls of the technology and can make informed decisions about when and how to use it. Accordingly, this presentation will discuss the creation and implementation of a 10-lesson unit that has since been integrated into 1st and 2nd-year courses at multiple universities.
The unit introduces students to the AI landscape, starting with an overview of AI's history and explaining how LLMs work, highlighting their differences from traditional AI. It teaches students prompt engineering, then tasks them with creating a range of AI-generated media, including art, videos, audio, HTML pages, and web games. The lessons are structured following Highland’s (2009) teaching-learning cycle. The unit features a final project where students design a local newspaper using AI. The evaluation focuses on their reflection and the effectiveness of their prompts rather than the product itself. Additionally, the unit delves into ethical considerations like fake news, job displacement, and academic integrity.
Additionally, the researchers developed a bespoke AI teaching assistant web-based chat application that was trained on the lesson plans and materials. The application allows students to converse with the AI assistant to review materials, get help with lesson vocabulary, and ask questions both in English and their native languages.
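The abstract does not specify how the assistant was built; one common, lightweight design is to place the relevant lesson material in the system prompt of a chat-completion call (retrieval-augmented prompting). The sketch below assumes that design, and the lesson excerpt, model name, and prompt wording are hypothetical rather than the researchers' implementation.

# Assumption: the assistant grounds its answers in lesson material supplied via
# the system prompt. This is one plausible design, not necessarily the authors'.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

LESSON_NOTES = """Lesson 3: Prompt engineering basics.
Key vocabulary: prompt, context, constraint, persona."""  # hypothetical excerpt

def ask_assistant(question, lesson_notes=LESSON_NOTES):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would work
        messages=[
            {"role": "system",
             "content": "You are a teaching assistant for this unit. Answer using "
                        "only the lesson materials below, in English or the "
                        f"student's native language.\n\n{lesson_notes}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_assistant("What does 'persona' mean in this lesson?"))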
The shift to online education during the pandemic presented unique challenges, particularly for instructors accustomed to using traditional teaching tools such as whiteboards. In response, I adopted Google Sheets as a whiteboard to facilitate student interaction and provide a platform for submitting and checking assignments. Initially, correcting grammar and vocabulary in these submissions was somewhat time-consuming. However, with the advent of AI, specifically ChatGPT in late 2022, my approach evolved. This presentation will detail the integration of the ChatGPT API with Google Sheets, enabling immediate feedback on speaking and writing assignments. Students enter their work into Google Sheets, which ChatGPT promptly evaluates, offering corrections and suggestions. This accelerates the feedback process, helping to reduce the instructor’s workload. Nevertheless, the system's implementation revealed hurdles, exemplified when students misunderstood alternative phrase suggestions as errors. To mitigate this, a ‘Noticing Log’ was created for students to note and reflect on the differences between their submissions and ChatGPT's ‘suggested corrections’. This process culminates in ChatGPT generating customized review materials and vocabulary tests in GIFT format for Moodle, streamlining the assessment procedure. The method not only conserves time but also delivers tailor-made educational content to each student. This discussion will encompass the method's practical application, its advantages, and its constraints, providing attendees with a comprehensive understanding of AI's role in my classes.
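The presenter's integration runs inside Google Sheets itself (most likely via Apps Script, though the abstract does not say), so the Python sketch below is only an illustration of the same read-evaluate-write loop using the gspread and OpenAI libraries; the credential file, spreadsheet name, and column layout are hypothetical.

# Illustration of the loop described above, not the presenter's actual system.
# Spreadsheet name, credentials, and column positions are hypothetical.
import gspread
from openai import OpenAI

client = OpenAI()                                              # assumes OPENAI_API_KEY is set
gc = gspread.service_account(filename="service_account.json")  # assumed credentials file
sheet = gc.open("Writing Assignments").sheet1                  # hypothetical spreadsheet

submissions = sheet.col_values(2)[1:]  # column B: student writing, header skipped
for row, text in enumerate(submissions, start=2):
    if not text:
        continue
    feedback = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Correct the grammar and vocabulary in the student's text "
                        "and briefly explain each suggested change."},
            {"role": "user", "content": text},
        ],
    ).choices[0].message.content
    sheet.update_cell(row, 3, feedback)  # column C: ChatGPT feedback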
While initial second language (L2) literature on ChatGPT was limited to descriptive articles that outlined the potential impact that the AI chatbot might have on the field, empirical research on the AI chatbot has begun to appear. For instance, researchers have examined the tool’s effectiveness for L2 writing assessment (Mizumoto & Eguchi, 2023; Pfau et al., 2023) and L2 teachers' and students' perceptions of the generative AI chatbot (Jeon & Lee, 2023; Yan, 2023). Nonetheless, empirical L2 research on ChatGPT is still needed, particularly in the context of self-regulated language learning. Moreover, it is unknown how prevalent generative AI use is among L2 learners. This presentation reports on a survey-based study that fills these gaps in the literature by addressing the following two research questions:
1) What are Japanese university English as a foreign language (EFL) students’ practices of ChatGPT for self-regulated L2 English learning?
2) What are Japanese university EFL students’ perceptions of ChatGPT for self-regulated L2 English learning?
A mixed-method research design was utilized to answer these two research questions, with survey data being collected from two private universities and one national university in Japan. At the start of the fall 2023 semester, the survey was administered and a total of 521 respondents fully completed it. While qualitative analysis concerning the EFL students’ perceptions of ChatGPT is still ongoing, quantitative analysis related to the students’ practices has revealed some interesting findings. For example, less than 25% of the participants (n = 130) had used ChatGPT in their English studies. Summarizing information written in the English language and translation were the top reported uses of ChatGPT for L2 English learning. Full results of the research and the study’s implications will be discussed during the presentation.
We are developing a language-independent authoring system for teachers to construct listening and speaking drills integrated with technologies such as speech analysis, assessment, recognition, and synthesis. With this system, teachers can prepare online drills composed of various tasks that can be flexibly customized to the objectives of the class and the needs of the learners. The tasks implemented so far are reading aloud, repeating, paraphrasing, shadowing, script-shadowing, explanation, and overlapping. In any task, learners' oral responses are automatically processed to be visualized and/or scored using word-based transcription, phone-based transcription, rhythm, intonation, listening fluency, etc. Teachers can use these results selectively as encouraging feedback for their learners.
For learners who want to improve their listening skills, shadowing tasks are well suited: learners shadow input speech without referring to any text. The shadowed speech is analyzed to visualize listening disfluency measured during shadowing. With this visualization, learners can check what kinds of input tend to cause listening disfluency.
For learners who want to improve their prosodic and segmental aspects of pronunciation, overlapping tasks may be preferable. The rhythm and intonation patterns of a learner's speech are visualized and compared to the corresponding model patterns. Also, his/her pronunciation is examined and compared to the model via word-based or phone-based transcription. All or some of them will be used as feedback to learners.
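As a minimal illustration of one of the comparisons mentioned above, the sketch below aligns a word-based transcription of a learner's shadowing or overlapping speech with the model text using Python's difflib. The authors' system is far richer (phone-based transcription, rhythm, intonation, fluency measures) and is not reproduced here; the ASR output shown is invented.

# Minimal sketch: word-level coverage of the model text in a learner transcript.
# The authors' system uses additional measures not shown here.
import difflib

def word_coverage(model_text, learner_transcript):
    model = model_text.lower().split()
    learner = learner_transcript.lower().split()
    matcher = difflib.SequenceMatcher(None, model, learner)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(model)

model = "the quick brown fox jumps over the lazy dog"
asr_output = "the quick brown fox jumps over lazy dog"  # invented ASR result
print(f"Word-level coverage: {word_coverage(model, asr_output):.0%}")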
The authoring system has already been introduced to create daily aural/oral drills for university students; it has been very well received and found to be effective in improving the students' oral proficiency. In the presentation, the authors will demonstrate how to construct some listening and speaking drills. They also welcome requests from teachers to make this system better meet educational and practical needs. This system will be made publicly available in the future.
Students often need good practice not only for English oral presentations but also for answering questions orally in Q&A sessions, especially when they present their research results for the first time at international conferences. In this study, we aimed to assist those students in improving their oral skills in Q&A by having ChatGPT read the papers, which were written by the students, and ask the students general and specific questions on the papers. We examined six configurations of ChatGPT for question generation to find the optimal one for this task. The six configurations are a combination of whether reference papers should be explicitly uploaded to ChatGPT or not, and whether questions should be generated for each keyword, for each section, or without specifying any scope of the paper. We recruited eight master's students who recently attended their first conferences and gave oral presentations there, and eight Japanese teachers of English. Each student evaluated the questions on his/her paper in terms of relevance, specificity, clarity, inspiration, and expectedness, while the teachers evaluated all 471 questions, paying attention to whether the questions were likely to be raised by language teachers or only by specialists. Finally, all the students and teachers selected the best configuration for Q&A practice. Unexpectedly, their responses indicated that the simplest configuration, where reference papers were NOT uploaded explicitly and NO scope was specified, was the best one. Following this result, ChatGPT in the simplest configuration was integrated with a speech interface, ASR and TTS, so that ChatGPT can ask questions orally and students can answer orally. Although the integrated system has not been evaluated yet, we believe that it can help students prepare for their first encounter with international academia.
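To make the winning setup concrete, the sketch below prompts a chat model to generate general and specific questions from a pasted abstract, mirroring the "simplest" configuration in which no reference paper is uploaded and no scope is specified. The prompt wording and model name are assumptions, not the authors' exact configuration.

# Sketch of the "simplest" configuration: no uploaded paper, no scope.
# Prompt wording and model are assumptions, not the authors' setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def generate_questions(abstract_text, n_general=5, n_specific=5):
    prompt = (
        f"A student will present the research summarized below at an international "
        f"conference. Write {n_general} general questions and {n_specific} specific "
        f"questions that audience members might ask in the Q&A session.\n\n{abstract_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_questions("We propose a lightweight method for ..."))  # hypothetical abstract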
In the realm of artificial intelligence (AI) translation, concerns regarding gender bias have gained prominence, warranting thorough investigation. This research endeavors to scrutinize the translations of marriage-related texts in AI’s English-Chinese translation through the lens of gender. Drawing inspiration from Simone de Beauvoir’s seminal work The Second Sex, which delves into the intricacies of gender dynamics, this study aims to compare translations produced by AI with those crafted by female and male translators. The original book, Le Deuxième Sexe, was first translated from French into English by Howard M. Parshley in 1953, and this English version served as the sole source text for Chinese translators in the 20th century. The English version was translated into Chinese by the female translator Yang Meihui (published in 1973) and later by the male translator Tao Tiezhu (published in 1998). Employing a corpus linguistics approach, this research will meticulously extract and compare texts pertaining to marriage in ChatGPT-generated translations and the male and female translations of The Second Sex. By systematically comparing the translations, we aim to uncover potential instances of gender bias inherent in AI translation systems. Through this comparative analysis, we seek to elucidate whether AI translations mirror or diverge from the gendered nuances present in human-authored translations, particularly in the context of marriage-related discourse. This study not only contributes to the burgeoning discourse on gender bias in AI but also offers insights into the implications of automated translation systems on gender representation and perception.
Keywords: AI translation, gender bias, marriage-related text, The Second Sex
With the increased interest in integrating artificial intelligence (AI) into language education, this practice-based research project explored the use of ChatGPT's audio functionality to enhance Japanese students' oral communication skills. Japanese university students often complain that they have little chance to practice their English skills. To address this issue, the following research question was devised: Can ChatGPT effectively serve as a conversation partner for Japanese students aiming to improve their oral communication skills?
Students participated in interactive sessions with ChatGPT, where they were required to interact through the speech-to-text function of the ChatGPT smartphone app. Following this, the students were given a reflective survey. This reflective process aimed to draw out students’ opinions on and feelings about their interactions. This presentation is based on that data and will shed light on the students' perceptions of communication with an AI interface. The presentation will begin by explaining how students were taught to use the chatbot, followed by an analysis of their reflections on that experience. Next, the viability of employing AI chatbots as language learning tools will be discussed. As AI technology progresses, there are broad implications for its implementation in educational settings; this presentation will offer an example of one way to use an AI chatbot for oral communication and propose some recommendations for the integration of AI technology in language learning classrooms.
The recent development of neural machine translation has raised new questions regarding human-machine tandem translation, with machine translation post-editing (MTPE) being one of the key issues. To address the gap created by the lack of process-oriented post-editing (PE) observations via neural machine translation systems, the current project explored post-editing processes undertaken by translation trainees using the cloud-based computer-assisted translation platform Termsoup. Employing a mixed quantitative and qualitative approach, the project examined post-editing processes through Shih's (2021) three-tier model where PE changes result from both text and communicative functions.
Ten translation trainees were assigned 12 post-editing tasks spanning three distinct text types (informative, expressive, operative) in two language directions (Ch-En & En-Ch) in an undergraduate translation course in Taiwan. The analysis focused on the trainees' post-editing changes and errors, particularly in relation to the dominant communicative elements/functions in different text types. The PE changes and errors are further interpreted in conjunction with the trainees' PE logs, interviews, and data derived from the average PE time and number for each translation segment.
Preliminary findings indicate that trainees demonstrated the ability to identify varied communicative functions across different text types, employing diverse strategies to tackle the tasks. However, not all PE changes were deemed appropriate or accurate. An examination of PE errors, efforts, and student self-reports reveals that trainees tended to rely more on machine translation when tasks were perceived as 'difficult to translate.' This is also the context where errors in machine translations might persist. Meanwhile, trainees displayed mixed attitudes towards machine-assisted translation. Some praised how MT-assisted translation reduced cognitive demands in deciphering source texts and provided lexical or grammatical support. Conversely, some argued that post-editing could be more cognitively taxing than translating from scratch.
"To commit ourselves to the modernization of Chinese language education through the application of information technology to different areas including curriculum, pedagogy and assessment," reads the mission statement of the Chinese Language Centre (CLC) of the Hong Kong Polytechnic University. CLC has long been participating in a number of Computer Assisted Language Learning (CALL) initiatives, including flipped classrooms and blended learning. This presentation will center on our latest efforts to include artificial intelligence (AI) components, such as GenAI, into our Discipline Specific Requirements Chinese (DSR-Chinese) courses.
Since 2013, CLC has customized DSR Chinese courses for students enrolled in various professional programs. In order to improve their professional language proficiency and get ready for the workforce, students receive workplace Chinese training. Thanks to the tremendous effort of our DSR Working Group, CLC has launched projects like "AI-assisted and Inquiry-based Chinese Language Learning for Authentic Learning and Assessment" in response to the recent advancements in GenAI. These projects center on the use of GenAI in the teaching and evaluation of writing, speaking, reading, listening, and integrated skills. For reading and writing, students have experimented with using GenAI tools for essay writing and evaluation. To practice speaking and listening, students have built their own digital humans, and they can participate in group or individual presentations.
At the end of the semester, semi-structured interviews and focus groups revealed that students were content with and welcomed the use of GenAI in Chinese language instruction. They said that with AI's help, learning a language became more enjoyable and engaging, which enhanced their academic achievement. Participating teacher representatives in the focus group discussion agreed that using GenAI in the course was essential, especially for encouraging writing. Furthermore, neither students' writing nor teachers' marking of the assessments this semester was hampered by the addition of GenAI components.
ChatGPT has already revolutionized our ability to have conversations with a computer. At present, this is generally limited to typing rather than speech itself. Utilizing speech synthesis and recognition (SR) technology for English speaking practice could offer alternative methods to assist in the development of speaking and listening skills. However, there are still challenges when utilizing such technologies in the classroom. In human conversation, we are acutely aware of visual and auditory cues that indicate when it is our turn to speak; such cues are absent from current AI models, which presents a significant challenge to learners utilizing SR technology. Moreover, native speakers can recognize when ChatGPT goes 'off script', while this can be challenging for L2 learners. This presentation will elucidate how we addressed these issues and how effective our solutions were. First, we developed a system that integrates SR and ChatGPT, allowing English learners more opportunities to practice speaking and listening skills in a customizable framework based on the needs of the course and the students. We then recruited L1 and L2 participants from various countries who interacted with our ChatGPT-SR speaking program to determine the feasibility of using such technology in the classroom, what difficulties students encountered, and actionable techniques that instructors can apply in their own classrooms.
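As an illustration of such a pipeline, the sketch below chains speech recognition, a ChatGPT reply, and speech synthesis with the OpenAI SDK. It is not the presenters' system; their implementation is custom and unspecified. Turn-taking is handled crudely by processing one pre-recorded audio file per turn, which reflects the turn-cue problem discussed above.

# Illustrative SR-ChatGPT-TTS loop, not the presenters' system. Audio file
# paths and model choices are assumptions; turns are "push-to-talk" recordings.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
history = [{"role": "system", "content": "You are a friendly English conversation partner."}]

def one_turn(audio_path, reply_path="reply.mp3"):
    # 1) Speech recognition on the learner's recorded turn.
    with open(audio_path, "rb") as f:
        text = client.audio.transcriptions.create(model="whisper-1", file=f).text
    history.append({"role": "user", "content": text})
    # 2) ChatGPT generates the partner's reply, keeping the conversation history.
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    # 3) Speech synthesis of the reply for playback.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
    with open(reply_path, "wb") as out:
        out.write(speech.content)
    return text, answer

# text, answer = one_turn("student_turn_01.wav")  # hypothetical recording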
Collaborating with professionals with different expertise is important to accomplish cross-disciplinary tasks. In the school context, this means setting up a collaborative mission to enhance students’ cross-disciplinary knowledge and build a positive attitude toward such collaboration. This issue is explored in the present study by proposing a mission for pre-service teachers of engineering and English to teach high school students, through English, about the building and use of robotic arms. Participating pre-service teachers need not only to collaboratively learn the related knowledge and skills from each other but also to co-design and carry out the teaching. This study investigates the collaborative work among these pre-service teachers through observations and interviews. The results reveal that the participants’ autonomy was evidenced during the process of Learning by Teaching, in which meta-cognition and a meta-curriculum for cross-disciplinary pre-service teacher training seemed to emerge. The results and discussion should shed new light on cross-disciplinary work in pre-service teacher training.
The recent introduction of advanced generative AI has had a noticeable impact on many fields, including higher education language learning. This offers new opportunities, including personalized learning experiences tailored to individual students’ needs, around-the-clock accessibility, immediate feedback mechanisms, and adaptive content delivery to enhance learning outcomes. However, in Japan's predominantly test-oriented and teacher-centered context of English education, students have limited opportunities to leverage AI technology to develop the diverse skills necessary for future careers. Learning is often passive and focused on memorization, understanding, and testing, while skills like analysis, evaluation, and creativity are also crucial. Therefore, we propose an AI-enhanced language teaching method that offers avenues for developing higher-order thinking skills (HOTS) like analysis, evaluation, and creation in English, along with lower-order thinking skills (LOTS), as delineated by the revised Bloom’s taxonomy (2001) for educational goals. While improving LOTS is easier in regular classes through passive learning, developing HOTS requires deeper engagement and critical thinking. For example, developing HOTS necessitates activities like synthesizing information and presenting appropriate solutions. Through AI-integrated learning, students could enhance their ability to analyze and evaluate the outcomes generated by AI, create and expand their knowledge, and develop language competency. The proposed method also leads to learning tailored to students’ English levels and personal interests, which further boosts learning. After attaining a certain level of HOTS and confidence, students can apply the learning outcomes to academic and career-based settings. While AI-integrated language teaching raises concerns about limited human interaction, privacy issues, algorithm bias, and accessibility challenges, it offers benefits within the context of Japanese higher education. By providing a safe and non-judgmental environment, AI-powered language learning helps mitigate communication barriers and foster student confidence, overcoming hierarchical constraints in English classrooms. Ultimately, this aids the digitalization of the Japanese education system and the development of global human resources.
Figurative language in general, and sarcasm in particular, is a difficult concept for learners of English to fully grasp and successfully use (Wallace, 2014; Kim and Lantolf, 2016). While a correct understanding of sarcasm may not have significant application in day-to-day business, for example, it is nonetheless a regular part of many native speakers’ language usage (Angraini, 2017). Some of the many reasons that people use sarcasm include criticising in a less-threatening manner (Claridge, 2011; Tracy, 2002), relieving stress (Ducharme, 1994), persuading others (Sopory and Dillary, 2002), and improving relationships with others (Haiman, 1993). For non-native speakers (NNS) who intend to ingratiate themselves in the culture, as well as the language, of first-language-English communities, there are many benefits to understanding it (Genc and Bada, 2005; Kuo and Lai, 2006). Despite perhaps being considered the gold standard, native speakers can also struggle with accurate sarcasm detection (Abercrombie, 2016).
As it turns out, natural language processing (NLP) models also struggle to always detect whether utterances are sarcastic or not (Wallace, 2014). This study aimed to answer the following research questions:
What patterns can be found through error analysis of sarcasm detection between NLP models and native and non-native speakers of English?
What applications of these patterns are there for language education?
Three models (RoBERTa, DeBERTa and a logistic regression model) were used to analyse 300 comments from the FigLang 2020 Reddit Dataset, with and without context. The same 300 comments were also given to 39 NNS and 30 native speakers, and the results were compared. The key findings will be shown in this poster presentation.
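By way of illustration, the sketch below shows how a fine-tuned transformer classifier of the kind used in the study can be applied to comments with the Hugging Face transformers library. The checkpoint path is a placeholder for a model fine-tuned on the FigLang 2020 data, and the comments are invented; this is not the authors' model or code.

# Applying a fine-tuned sarcasm classifier to comments; the checkpoint path is
# a placeholder, not the authors' model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="path/to/roberta-finetuned-on-figlang2020",  # hypothetical checkpoint
)

comments = [
    "Oh great, another Monday. Exactly what I needed.",
    "Thanks for the detailed explanation, it really helped.",
]
for comment, result in zip(comments, classifier(comments)):
    print(f"{result['label']:>12} ({result['score']:.2f}): {comment}")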
In the past two decades, virtual exchanges driven by technological advancement and a desire to promote engagement among students in different countries and advance intercultural understanding have become widespread globally. A mounting number of studies points to potential benefits of virtual exchanges such as increased autonomy, improved motivation, a sense of learner agency, enhanced intercultural awareness and understanding for students with different language abilities. Moreover, a growing body of literature reports on synchronous and asynchronous exchanges connecting different levels of students, mostly lasting several weeks or a whole semester or more. In this presentation, we report on a short-term synchronous virtual exchange undertaken between university students in Japan and Taiwan. First, we describe the process of implementing this exchange at the respective universities, the students who participated in this exchange, the activities they completed, and the technology used. Then, using reflective narratives from the three organizers, we consider the cognitive, emotional, and motivational benefits this exchange had on our students, the difficulties encountered as organizers of the exchange, and possible ways to improve and continue this exchange in the future. Finally, we present a general outline and tentative roadmap for other educators wishing to offer meaningful experiences through international intercultural exchanges for their students.
Artificial Intelligence and large language models are most frequently associated with OpenAI, Google Bard, or Microsoft Copilot (Bing). Most of these AIs are paid services; however, building and running your own AI is surprisingly simple. Moreover, there is no need for expensive server hardware; it is possible to run your own private AI on your laptop or even on a Raspberry Pi. Running your own private AI comes with many advantages. For example, you can choose exactly which model(s) to make available to users and how much computing power to allocate to them; being self-hosted and owned, you retain control of your data and are not giving it away to corporations; and you retain complete freedom to use the software and your hardware in any way you choose. This poster will present Ollama, a simple, free, open-source platform for launching and running AI models. Installing it on your devices is as simple as running a single command or launching a Docker container. There are several open-source models available that have been trained for a variety of different purposes. A web user interface can also be added, connecting to the Ollama API for your own private "ChatGPT" without needing so much as an external internet connection. Participants will also be able to try out the AI, as we will bring one along with us to demonstrate.
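For anyone curious what connecting to the Ollama API looks like in practice, the sketch below sends a single prompt to a locally running Ollama server from Python. It assumes Ollama is installed, a model (here "llama3" as an example) has been pulled with "ollama pull llama3", and the server is listening on its default port, 11434.

# Minimal query to a local Ollama server; model name and prompt are examples.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Give three conversation questions about hometown festivals.",
        "stream": False,  # return one JSON object rather than a token stream
    },
    timeout=120,
)
print(resp.json()["response"])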
This poster presentation exhibits a method of language acquisition, employing Virtual Reality (VR) photo-based tours as a means to appreciate a foreign language and the cultures that are connected to it.
In this poster presentation, I shall discuss the development of VR Photo-Based Tours to address the needs of three different projects that incorporated the linguistic and cultural elements of English, Filipino, and Japanese. The tours incorporate interactive components, such as hotspots accompanied by lexical labels and situational discussions.
The study utilized a mixed-method research approach, incorporating both quantitative assessments of vocabulary retention and qualitative input on learner involvement.
The study investigated the relationship between spatial presence, which refers to the feeling of "being there" in a virtual reality (VR) environment, and situational interest and motivation in language learning. The findings revealed a notable and favorable correlation, indicating that increased levels of spatial presence in virtual reality tours augment the situational interest in language acquisition. The qualitative findings emphasize the importance of ergonomics, the physical space in which the tours are experienced, and the scenic novelty of the place being featured in the tours.
Incorporating VR photo-based tours into language teaching shows potential for improving learner engagement and efficacy in acquiring vocabulary. The strong association between spatial presence, situational interest, and learning outcomes presents opportunities for incorporating technology in educational environments.
In this poster session, a wide variety of commercial apps used for language learning will be demonstrated, including Immerse-me, Body Swaps, Mondly, Noun-Town, and VR-Chat. Participants will also be able to demo and try the apps. After trying out the apps, attendees will be asked to fill out a survey regarding their virtual world experience, with variables focusing on iconicity, approach to language learning, and immersion in the app. Iconicity is "a quality of resemblance between the form and meaning of a symbol." In the context of VR, iconicity refers to how closely the graphical representations resemble their real-world counterparts. The concept is central to the experience of VR as users are either interacting with the graphical representations in the app itself or communicating with other users on top of the graphical design. While the uses of VR are evident for scale-based fields such as architecture and telemedicine, the uses for language learning are less evident. In language learning VR apps, the design of virtual worlds and graphical representations often has varying degrees of iconicity, ranging from realistic to symbolic depictions of real-world objects, which may impact the learning experience differently compared to VR apps in other fields. Research has shown that the real-world spatial environment affects language acquisition and perception (Fort & Schwarz, 2022). Therefore, it is crucial to understand how the design of the virtual environment may affect language learning and student acquisition when we use VR for language learning. The survey results will help inform teachers, XR designers, and students by providing insights into how the typological differences between virtual reality apps may impact language learning outcomes and user experiences.
Utilizing virtual reality (VR) yields reported benefits in foreign language acquisition, offering a detailed content presentation unavailable through traditional methods and enabling experiences previously deemed dangerous, time-consuming, or cost-prohibitive (Feng et al., 2018; van Ginkel et al., 2019). Conversely, authentic task-based learning is recognized as advantageous in foreign language acquisition, with recent emphasis on authenticity for task activities, grounded in the belief that a disconnect between learning situations and real-life language use hinders effective communication.
This practice-based presentation will illustrate the seamless integration of VR into task-based language activities, aligning with language learning objectives. The VR task involves taking learners on a virtual tour of their favorite places, requiring them to introduce and describe these locations to their peers using a map application. The task, set at the A2 level of the Common European Framework of Reference for Languages (CEFR) to match learners' proficiency, employs the "Oculus Quest 2" VR headset and a map app incorporating Google Street View into VR, chosen for its user-friendly interface, even for VR beginners. Practical tips, including head-mount display (HMD) setup and management with multiple students, will be shared, along with detailed directions for instructors initiating their first HMD-based activity.
Survey results from learners will be presented, introducing the impact of the authentic VR environment on independent engagement and its role in facilitating output activities. Additionally, the survey outcomes will elucidate shifts in motivation toward foreign language tasks induced by the immersive VR experience.
The integration of artificial intelligence (AI) in educational settings has opened new avenues for language teaching and learning. Previous studies (e.g., Pack & Maloney, 2023; Pfau et al., 2023) have presented a potential for AI in language teaching. Building on this foundation, this study explores the utilization of ChatGPT, a sophisticated AI language model, in the context of English as a Foreign Language (EFL) instruction in South Korea. It focuses on how EFL instructors can effectively use this tool to enhance their teaching.
Five Korean EFL instructors were selected to investigate how ChatGPT can be utilized in English teaching. An online workshop was conducted to provide insights and guidelines for effectively integrating ChatGPT as a teaching assistant in English language classrooms. Following the workshop, the participants completed a questionnaire about their experiences and perspectives on using ChatGPT in their teaching practices. They were then invited for in-depth interviews to gather qualitative insights into this issue. These interviews were analyzed to uncover the instructors' initial impressions, their strategies for integrating ChatGPT into their teaching methods, the challenges they faced, and the perceived benefits for their students.
The findings indicated that while the instructors recognized ChatGPT's potential as a valuable teaching aid, its effective use necessitated active engagement with the tool. They emphasized the importance of understanding ChatGPT's capabilities and limitations to tailor its use for language teaching. The instructors also noted the need for a strategic approach in integrating ChatGPT into the curriculum, highlighting its role in complementing traditional teaching methods rather than replacing them. The study concludes that ChatGPT can be a powerful tool for EFL instruction when used thoughtfully and interactively by teachers. It offers recommendations for EFL instructors on effectively incorporating ChatGPT into their teaching, including continuous experimentation and adaptation based on student needs and learning objectives.
This poster presentation proposal explores the role of facial expressions in online classes and interviews and introduces the use of a specialized prompter as a potential tool for improvement. First, the study emphasizes the significance of facial expressions in online communication between teachers and students and their impact on communication between potential interviewers and interviewees (author et al., retracted, 2022). The second phase of the presentation will show the various hacks we have tried to engage in “eye-to-eye” communication via online conferencing tools such as Zoom and MSFT Teams (author et al., retracted, 2023). Finally, we will compare images captured with the specialized prompter and those taken using a computer camera. Those images will be analyzed using both AI and gaze technology. Our comparison study aims to find any differences in facial expressions that can be seen or detected. This research will provide real-world evidence for the possible benefits of using a prompter to improve interviewee presentations.
The abrupt pandemic-driven shift to online learning fractured traditional academic spaces, exposing deep vulnerabilities in communication, collaboration, and the very foundation of the stable academic subject. This study, drawing on post-qualitative inquiry and Deleuzoguattarian rhizome theory, recodes this disruption not as a mere obstacle, but as an unexpected and fertile ground for transformative learning with computer-mediated communication technology (CMCT). Interwoven narrative maps, rich discussions, and intimate conversations with three undergraduate students and three professors at the University of the Philippines Los Baños reveal their diverse and often poignant experiences navigating this tumultuous shift. Mumby's Discourse of Vulnerability is reframed here, not as a deficiency, but as a potent catalyst for collaborative experimentation and dynamic interactions fostered with the unique affordances of CMCT. This study emphasizes the crucial importance of embracing messiness, fluidity, and contingency in the digital classroom. By meticulously exploring how CMCT reshapes communication, collaboration, and even identity formation in these fluid digital spaces, it offers valuable insights and practical guidance for educators and researchers seeking to cultivate more open, inclusive, and resilient academic environments in the age of online learning. The analysis revealed how shared anxieties and challenges spurred collaborative experimentation and dynamic interactions, akin to the rhizomatic model, (re)shaping online learning communities using computer-mediated communication technology. The findings emphasized the importance of embracing messiness and contingency in online spaces, fostering collaborative meaning-making through open communication, and rethinking academic success beyond traditional metrics. This rhizomatic recoding, driven by CMCT and vulnerability, points towards a future where learning thrives in dynamic, interconnected networks, not only redefining academic success but also transforming the very landscape of education itself.
We describe a new annotation framework and dataset for generating pedagogically effective automated feedback comments on second language (L2) learner writing.
Recent advances in natural language processing (NLP) have contributed to the development of myriad AI writing assistance programs, such as Grammarly or DeepL Write, that can correct grammatical errors in text. The use of such applications has become widespread among learners of English. However, while these systems are effective at improving text, they are not optimally designed to facilitate language learning. They favor direct revisions, often with a click-to-fix functionality that can be applied without the learner considering the reason for the correction. Meanwhile, depending on the error type, learners may benefit most from level-appropriate elaborated feedback in the form of strategically indirect hints, especially on generalizable grammatical rules.
To support the generation of such comments, we introduce a hierarchical annotation schema labeling errors by pedagogical feedback point using such terms as “relative clause” or “comparative,” and describe plans for a dataset of learner sentences annotated with such labels. We additionally tag errors as “generalizable” or not, allowing differentiated comment strategies based on this factor. Furthermore, we add an example feedback comment to each error in the learner writing corpus. These feedback comments are labeled as either direct corrections or hints.
A dataset annotated in this fashion can inform data-driven methods for producing feedback that better targets the points learners are struggling with and more closely resembles the angles taken by teacher feedback. This annotation framework represents a novel attempt to bridge insights from education and AI to provide truly supportive intelligent writing and learning assistance technology.
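To make the schema concrete, the record below shows what a single annotated error might look like under the framework described above: a feedback-point label, a generalizability tag, and an example comment labeled as a hint or a direct correction. The field names and the example sentence are this writer's illustration, not the authors' released annotation format.

# Illustrative record only; field names paraphrase the schema described above.
from dataclasses import dataclass

@dataclass
class AnnotatedError:
    sentence: str        # learner sentence containing the error
    error_span: str      # the tokens judged erroneous
    feedback_point: str  # hierarchical label, e.g. "relative clause", "comparative"
    generalizable: bool  # whether a generalizable grammatical rule applies
    comment: str         # example feedback comment
    comment_type: str    # "hint" or "direct correction"

example = AnnotatedError(
    sentence="She is more taller than her brother.",
    error_span="more taller",
    feedback_point="comparative",
    generalizable=True,
    comment="Check how comparatives of short adjectives are formed.",
    comment_type="hint",
)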
Compared to the outdated method of rote memorizing, the opportunities for learning English have increased with the advent of artificial intelligence technology. This study used Lingodub, an AI-based English learning application that allows students to engage with different types of English content. By offering a wide range of themes and characters for dubbing, students can practice English by dubbing scenes from various settings, or they can improve their conversational English by dubbing dialogues from movies or TV shows. The variety of themes and characters not only keeps students engaged and motivated but also exposes them to different vocabulary, grammar structures, and cultural contexts, enhancing their overall English-speaking abilities. Students can practice pronouncing new words by listening to demo recordings and trying to mimic them to enhance their English-speaking skills. The AI technology can analyze students' recordings and compare their pronunciation to a standard pronunciation model. It can identify areas where the student's pronunciation is unclear or inaccurate and provide feedback and suggestions for improvement. This application produces a more immersive and interactive learning environment by allowing students to pick interesting dubbing themes and personalities. In this way, students' curiosity and attention were increased, and their motivation to learn English-speaking skills was enhanced. However, students who do not frequently use recording devices may not be accustomed to hearing their recorded voices. In this case, teachers or peers can help students by listening to the recorded content and providing them with oral reading guidance or comments, thereby lessening the circumstances in which students are not used to hearing their own voices during the learning process. This can help students express emotions better when dubbing, which in turn increases their willingness to study English-speaking skills.
The rapid rise of ChatGPT and similar large language models (LLMs) has given teachers access to a new and potentially valuable resource. But while many teachers may focus on the potential in-class applications of artificial intelligence, there is also potential for LLMs to support teachers outside the classroom. This poster presentation will explore the use of ChatGPT and similar LLMs in lesson planning and materials development.
ChatGPT may be used by educators in a number of ways: it can generate texts, create comprehension and discussion questions, suggest activity and project ideas, and even design lessons and entire syllabi. The presentation will examine ChatGPT’s abilities and evaluate its effectiveness as a lesson planning tool, and provide suggestions on how language teachers can make effective use of LLMs. Also highlighted will be the potential limitations of using LLMs, notably the risk of hallucinations or factually incorrect responses, and how best to prevent these problems from arising. By being aware of both the opportunities and limitations of using AI, teachers can make informed decisions on how (and whether) to use it.
The increasing trend in education involves teachers and students utilizing various online tools and learning management systems (LMS) like Google Workspace, Moodle, Canvas, and Blackboard. Among these, Microsoft's LMS offering is worth considering. This presentation highlights how Microsoft Teams is user-friendly and straightforward for educators to set up and use in their classrooms without needing extra help. Microsoft Teams stands out as it operates effectively right after installation without the need for any additional plug-ins or customizations. It includes widely-used software like the Microsoft Office suite, which is prevalent in many businesses, thus providing students with practical, real-world experience both during their studies and beyond. This presentation not only sheds light on the digital learning platforms prevalent in higher education, particularly focusing on Microsoft Teams as an LMS but also provides valuable recommendations and insights on how to optimize Microsoft Teams for Education. This aims to enhance the learning experience in both academic and professional settings.
In the Japanese education system, learners of English commonly struggle with certain areas such as writing and speaking, and with course assessments that require these skills. Furthermore, the desire for more feedback from teachers is often raised in class surveys. Several factors contribute to this, including fear of mistakes, reluctance to speak, instruction that focuses heavily on exam preparation, and time constraints that prevent teachers from assessing individual students' skills and providing large amounts of personalized feedback.
This poster presentation examines how technology was used to offer additional opportunities for classes of up to 18 EFL students on a Global Issues course at a Japanese university to build on their English outside of the classroom and receive additional feedback.
This presentation explains which forms of software were used and how the feedback assisted with in-class topics, course assessments, and an end-of-course presentation. For each task, students were given opportunities to practice, re-record or rewrite, submit, and review their work.
The technology consisted of an LMS, AI tools for feedback on written and spoken tasks, Flip for spoken assessment and peer feedback, a graded news website, and presentation software.
The presenter will also explain how the different forms of technology were implemented, and how these approaches were received by the students. Additionally, any limitations and challenges of these approaches noted by the teacher and students will be addressed.
This research explores the value of generative AI as a tool for supporting multimodal reading and writing in the creation of crime stories. Currently, at this research site, the use of generative AI for expository essays is against program policy, but narrative creation is allowed as a no-stakes activity. To elucidate patterns in how language students explore and use generative AI as part of their language study, this study at a Japanese university examined the tools and strategies that English students employ when asked to work with a classmate to iteratively prompt ChatGPT (or an equivalent LLM) in the creation of a crime narrative. The research questions were: 1) What strategies and tools do students use when crafting prompts to develop a crime narrative? 2) What concepts and skills do students need to generate comprehensible output that meets the criteria for a mock trial narrative? CEFR B2-level English students, working in pairs, chose a crime and recorded their iterative input to and output from ChatGPT while developing a crime narrative. Data from three classes were analyzed. Pairs of students were observed in the process of using the AI, and field notes were taken. Follow-up interviews and observations were used to validate the prior data. Finally, a survey of language program students was conducted to elicit more general attitudes toward, and patterns of use regarding, generative AI tools. The results of this study elucidate basic issues regarding L2 students' prompt authoring and revision, the types of online tools students use, and the strategies needed to create a crime narrative with the help of AI systems. Implications for L2 learners, human-AI hybrid authorship of narratives, and the normalization of tools like ChatGPT for language learning through hybrid authorship will be discussed.
This research investigates the feasibility of using ChatGPT to create authentic English learning materials. Our hypothesis posits that if neither teachers nor students can distinguish between ChatGPT-generated and human-made materials, ChatGPT could be a viable tool for future language learning material development.
We first compared the time required to develop similar materials using ChatGPT versus traditional human methods. To assess the quality of the ChatGPT-created content, we invited six teachers (three native and three non-native English speakers) to evaluate and identify ChatGPT-generated versus human-made materials. Additionally, 83 students participated in judging the quality of the same materials. Results indicated that native English-speaking teachers (with 80% accuracy) were better at distinguishing ChatGPT content from human-made materials, compared to non-native English teachers (40%) and Japanese students (45%). Comments from five teachers and 72 students were analyzed using the qualitative data analysis software NVivo. The findings from the data led to three key conclusions: ChatGPT is a quicker and more effective tool for developing English learning materials; the quality of ChatGPT-developed materials is comparable to that of human-made resources, with learners often unable to tell the difference; both teachers and students raised concerns about the potential lack of personal feelings and monotony in ChatGPT-generated content, suggesting that such materials should be reviewed by human teachers.
This study explores the integration of generative AI, specifically ChatGPT, into the scriptwriting process for English as a Foreign Language (EFL) learners engaged in video production. Previous research indicates that video production in a second language (L2) enhances language learning (Lam & Yunus, 2023) and digital literacy (Lacy, 2018). Additionally, generative AI technologies have been shown to improve L2 learning capabilities (Ruan et al., 2021). This study combines these areas by investigating how ChatGPT can assist EFL learners in writing authentic dialogues. The participants were university students enrolled in a Content and Language Integrated Learning (CLIL) curriculum. They had varied levels of experience in video production, mostly involving social media, but none had prior experience in scriptwriting or video production in English, and none had been trained in writing English prompts for ChatGPT. During the study, the participants first wrote their own dialogue for the video without the assistance of AI. Once they had written the dialogue, they were instructed on how to prompt ChatGPT for feedback on their dialogue and for more natural alternatives, contributing to more effective language learning and digital literacy development in the context of video production.
In an EFL environment, opportunities to interact with native speakers in English are often limited. However, the rise of generative AI has opened up new possibilities for EFL learners. Chatbots can now interact with learners in a more human-like manner. In this presentation, we will introduce a chatbot we developed: CoolE Bot. This English chatbot, powered by ChatGPT, integrates Text-to-Speech (TTS) and automatic speech recognition (ASR) functions, featuring design elements specifically tailored for EFL learners. CoolE Bot includes themes and tasks that provide guidance and context for student interactions. An example is the “Invited Talk” feature, where students select chatbots representing various professions. This allows them to engage in domain-specific conversations and to explore their career interests, enriching their learning experience.
CoolE Bot was developed over three phases. Phase 1 involved developing text and voice chatting, sentence translation, TTS playback, and AI-assisted responses. In Phase 2, we conducted a study in which students used CoolE Bot, and we improved it based on their feedback. Phase 3 added new features such as adjustable TTS speed, response correction, and chat log downloads.
In the Phase 2 study, five college students were invited to use CoolE Bot's Invited Talk section through text and voice communication. They then completed evaluation questionnaires and took part in interviews to provide feedback. Initial results showed that CoolE Bot's responses were clear and the interface was intuitive. The students also identified areas for further improvement, such as adding more chatbot roles, topics, and tasks, and refining the ASR mechanism to improve the overall user experience.
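CoolE Bot's implementation is not included in the abstract above, so the following is only a minimal sketch, assuming the OpenAI Python SDK, of how one text-or-voice turn of such a chatbot could be wired together: ASR, then the chat model in a chosen "Invited Talk" role, then TTS. The model names, voice, role prompt, and file paths are illustrative assumptions, not CoolE Bot's actual configuration.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLE_PROMPT = ("You are a software engineer being interviewed by an EFL learner "
               "about your career. Reply in clear, friendly English and end each "
               "turn with a follow-up question.")

def chatbot_turn(audio_path: str, history: list[dict]) -> str:
    # 1) ASR: transcribe the learner's recorded utterance
    with open(audio_path, "rb") as f:
        heard = client.audio.transcriptions.create(model="whisper-1", file=f).text
    history.append({"role": "user", "content": heard})

    # 2) LLM: generate the chatbot's reply in the chosen profession role
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": ROLE_PROMPT}] + history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})

    # 3) TTS: synthesize the reply so the learner hears it spoken
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
    with open("reply.mp3", "wb") as out:
        out.write(speech.content)
    return reply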
Many EFL and ESL teachers do not provide corrective feedback on L2 writing. Reasons include but are not limited to: 1. considerable time required; 2. unhelpful but necessary delay in returning corrective feedback; 3. concerns about comprehensibility of the feedback in the L2; and 4. concerns that students may simply disregard feedback. In recent years, automated corrective written feedback has become available in the form of commercial programs such as Grammarly and ProWritingAid. These programs potentially solve issues of long hours spent marking papers and the delay between students writing and receiving feedback. However, additional problems such as use of machine translation and copy-paste become an issue when moving from paper format to digital. And, crucially, programs such as the aforementioned usually offer feedback in the students’ L2, feedback that may not be comprehensible to all students.
With the advent of ChatGPT and other AI programs using large language models, it is now possible to provide automated written corrective feedback in the students’ L1. This presentation showcases results of an exploratory study using a ChatGPT-powered plugin currently in development for the popular Moodle LMS. The classroom intervention on which this study is based had four main steps: a) individual or collective brainstorming and online search for potentially useful vocabulary (3 minutes); b) subsequent free-writing activity in an online browser with copy and paste functions disabled (10 minutes); c) reading the ChatGPT feedback in L1 Japanese (2 minutes); d) completing paper grammar logs to process feedback (5 minutes).
Data has been compiled from students’ written submissions as well as the ChatGPT-generated feedback, students’ grammar logs, and survey instruments administered to students and teachers. The presenters aim to discuss concerns about accuracy and comprehensibility of the feedback in the L1, ease of use, what prompts work best, and student and teacher perspectives.
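The Moodle plugin itself is still in development and its code is not shown in the abstract, so the call below is only a hypothetical sketch of the feedback step, again assuming the OpenAI Python SDK: the student's timed free-write is sent to the model with an instruction to return numbered corrections explained in the student's L1 (Japanese), which students could then transfer to their paper grammar logs. The model name and prompt wording are assumptions.

from openai import OpenAI

client = OpenAI()

FEEDBACK_INSTRUCTION = (
    "You are an English writing teacher. For the student text below, list each "
    "grammatical error as a numbered item with (1) the original phrase, (2) a "
    "corrected version, and (3) a brief explanation written in Japanese, the "
    "student's L1. Do not rewrite the whole text."
)

def l1_corrective_feedback(student_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": FEEDBACK_INSTRUCTION},
            {"role": "user", "content": student_text},
        ],
        temperature=0.2,   # keep corrections consistent rather than creative
    )
    return response.choices[0].message.content

print(l1_corrective_feedback(
    "Yesterday I go to shopping with my friend and buyed new shoes."))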
This presentation provides an update on the development of a new university-level English course with a high proficiency enrollment requirement (IELTS 7+). The course, which was initially designed before the emergence of tools like ChatGPT, was revised in Year 1 in response to the university's stance of encouraging AI use, in keeping with its identity as a university of science and technology. These course developments were first introduced in a JALTCALL 2023 presentation. This JALTCALL 2024 presentation will focus on current GenAI-related revisions in Year 2 and on future changes.
The presentation will begin with a short overview of this academic literacy course. As an outcome-based course, a key expectation is for students to learn how to research topics using the university library's electronic databases, to find and synthesize sources for academic writing and speaking tasks, and to incorporate research correctly (e.g., avoiding plagiarism, using lateral reading skills to verify sources, understanding how to cite and reference sources). The course thus focuses on activities that bridge gaps in students' research experience and address new issues raised by GenAI (e.g., hallucinated sources, using AI as a source or tool, considerations for citing and referencing). A few examples of lessons and activities will be shared.
After the overview, the presentation will continue with feedback from the course developers, teachers, and students. This will include self-reported GenAI use in post-assessment questionnaires by approximately 700 students. A summary of insights and experiences from teachers and students on incorporating GenAI in instruction and assessments will focus on points that may be useful to others developing similar courses or activities.
The session will end with a short discussion questioning what academic literacy skills are needed for our 21st-century learners conducting research online and using GenAI.
ChatGPT and other generative AI tools likely constitute the most significant technology to affect writing studies in recent years, and their impact has been widely discussed in public (Rudolph, Tan, & Tan, 2023). The technology is still too new and evolving too quickly for any consensus on how to use it, how to manage it, how to regulate it, or how to guide its future development. Proponents tout AI's potential to ease the tedium of writing, enhance students' learning experience, and reduce teacher workload, while critics fear plagiarism, damage to human relationships, and unemployment, among other concerns. These hopes and fears are both legitimate, and we need to continue public conversations about what to do with this technology, because it will affect all of us and each of us should have a say in forming a public consensus. This presentation seeks to clarify some of the issues in this debate by conducting a theoretical analysis of AI-generated text from a pragmatic perspective. "Writing" is not just about generating texts, and the "writer" is not always the person who generated the text. Conceptual tools such as speech act theory (Austin, 1975), performance theory (Goffman, 1959), and writer identity theory (Ivanič, 1998) can show us which specific aspects of writing are affected by AI, how AI tools change the nature of the social action that results from writing, how our understanding of "writing" must change as a result, and what it may mean to study and teach writing in the near future.
References
Austin, J. L. (1975). How to do things with words. Oxford University Press.
Goffman, E. (1959). The presentation of self in everyday life. Anchor.
Ivanič, R. (1998). Writing and identity: The discoursal construction of identity in academic writing. John Benjamins.
Rudolph, J., Tan, S., & Tan, S. (2023). War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning and Teaching, 6(1), 364-389. https://doi.org/10.37074/jalt.2023.6.1.23
Recent advancements in Generative Pre-trained Transformers (GPT) have opened up new possibilities for language teaching and learning. Leveraging the Vygotskian concepts of the More Knowledgeable Other (MKO) and the Zone of Proximal Development (ZPD), this report presents an investigation of the viability of GPT-3 as an MKO for English Language Learners (ELLs). The study introduces the SpeechCoach web application (www.speechcoach.io), which integrates GPT-3 through its Application Programming Interface (API). The participants in this study (n=680) overwhelmingly rated the feedback given by SpeechCoach on their spoken transcripts as “very helpful”, indicating its effectiveness. Follow-up interviews with six participants further confirm that the feedback is particularly beneficial for intermediate-level adult English learners, assisting them in advancing within their ZPD. Although this research shows promise for the GPT-based SpeechCoach system, it requires further refinement, and it is important to acknowledge the limitations inherent in such systems.
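SpeechCoach's source code is not part of the report above; the sketch below only illustrates the general pattern it describes, namely a web endpoint that forwards a learner's spoken transcript to a GPT model through the API and returns feedback. It assumes Flask and the OpenAI Python SDK, and it uses a current chat model as a stand-in for the GPT-3 models the study originally accessed; the route name and coaching prompt are illustrative.

from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

COACH_PROMPT = ("You are a supportive speaking coach for intermediate adult English "
                "learners. For the transcript you receive, point out one grammar "
                "issue, one vocabulary upgrade, and one way to sound more natural.")

@app.post("/feedback")
def feedback():
    transcript = request.get_json()["transcript"]   # transcript of the learner's speech
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": COACH_PROMPT},
                  {"role": "user", "content": transcript}],
    ).choices[0].message.content
    return jsonify({"feedback": reply})

if __name__ == "__main__":
    app.run(port=5000)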
The introduction of generative AI tools such as ChatGPT and Bing Chat has had a massive impact on education, especially in the field of language teaching and learning. GenAI tools can produce complete written texts that could potentially be used to replace students' own writing. This has led to considerable uncertainty about the best way forward now that these tools are widely available.
This presentation will share the experiences of a Hong Kong language center. In this context, the decision was made to embrace GenAI tools for teaching and learning. In light of this decision, a self-directed GenAI Literacy module was created to address three aims:
1) Preparing students with the basic knowledge and understanding needed to use these tools
2) Providing students with information about the risks, limitations, and ethical use of these tools
3) Ensuring equity and fairness amongst students, i.e., a level playing field
The presentation will describe the rationale, process, challenges, and feedback from students and teachers about the module, and aims to show that even when GenAI tools are incorporated into the classroom, practical considerations need to be addressed in order for students to gain the most from these tools.
College English classes in Japan often devolve into grammar translation and passive content, which does not improve students' ability to communicate in English. Developing active skills is more important and meaningful. As communication is essentially spoken, textbooks and excessive teacher talk can be major obstacles. In this course, students and their teacher worked together to produce video recordings of live conversations to motivate students to be communicative, develop cooperation, and meet spontaneous and meaningful performance goals. A year-long course of approximately 20 classes, 50 hours in total, is outlined. In sets of two classes per topic, students talked freely about familiar and interesting themes in group sessions. They asked and answered questions, with the teacher correcting only major communication breakdowns in order to develop their independent interaction. In the next class, students chose related locations around the school to experience live, unscripted interactions, using a two-way interview style in which they could draw on both ideas from their previous practice and any new ideas that occurred to them. Topics covered a wide range, from nearby tourist spots and aspects of college life to places they recommended in Japan and abroad. The teacher intervened only minimally, scaffolding continuous interaction when students froze up. These recordings were edited into a single captioned movie on the topic and, with students' permission, sent to them to review, enjoy, and keep, both as memories of speaking English and as preparation for the oral course-end exam. Students thus learned to enjoy English communication and to prepare themselves to talk with other speakers of English, whether native or not. Attendees will learn about the technology I use, how I build rapport with students, and when I intervene, as well as my informal observations of student perceptions.
The benefits of VR for English education have been reported, including increased engagement (Hu-Au & Lee, 2017), a lowered affective filter (Schwienhorst, 2020), reduced public speaking anxiety (Godefridi et al., 2021), and greater student motivation (Tai, Chen, & Todd, 2020). This presentation details a pilot study on the iTL VR café, focusing on the use of Virtual Reality (VR) for English practice. iTL stands for Information Technology and Law, the nickname of our department. Although students at iTL hope to improve their speaking skills, the number of English classes offered by the department is limited. To find a solution, we decided to create a space on a VR platform, Spatial, where students can practice English with other students in a relaxed manner. Adopting the concept of Vygotsky's Zone of Proximal Development (ZPD), proficient English-speaking university students led VR sessions to support their peers' English-speaking practice. Eight students participated in six English practice sessions on the VR platform. A speaking test was conducted before and after the six sessions, and pre- and post-questionnaires were collected. The results showed that the students enjoyed the English practice, became more motivated to learn English, and lowered their foreign language anxiety, though no improvement was observed in the post-test of speaking. Referring to the potential of avatar-based learning spaces for autonomous learning and engagement among students (MEXT, 2020), we will explore the feasibility of implementing a VR café for English practice. This discussion will include its benefits, challenges, and potential, as reflected in the pilot study. Additionally, the presentation will touch upon future directions, such as expanding the study to include more participants and sessions and the possibility of integrating telecollaboration with students at international universities into the iTL VR café.
Artificial Intelligence (AI) has emerged as a transformative force in many sectors of our lives, including academia. Machine translation (MT) has developed to the point that few students in tertiary education now own an electronic dictionary, which propels the use of MT and AI tools in the form of web-based tools and smartphone applications. Hence, the majority of Japanese universities and colleges now need to develop and implement their own AI policies to enhance student education and research processes. This presentation will identify the commonalities among multiple institutional policies and summarize how they were created and rolled out, with the ideal goal of inhibiting AI misuse by students in their submitted work and proactively discouraging the submission of work that would be considered not the students' own. The analysis will draw on university data from each region of Japan (Hokkaido, Tohoku, Kanto, Chubu, Kinki, Chugoku, Shikoku, and Kyushu-Okinawa). While some institutions allow and encourage the use of AI as a learning resource and production tool, there remains the risk that some productions would not be considered students' own work and would therefore constitute a breach of academic integrity. Questions about this new user experience will be presented for ethical consideration, with implementation in mind, regarding how best to adapt these AI policies. Audience members can expect to develop a better sense of the national ethical approach to AI usage, along with new ideas on both implementing policies and developing a robust student learning experience.
This presentation covers the technology and pedagogy behind MiMi, an AI-powered speaking assistant based on generative AI, speech recognition, and text-to-speech technologies. It will review the accuracy and efficacy of GenAI Assistants, including the pedagogy that underpins them, their impact on student motivation, and how they can be used to assess students’ language ability. The presentation will also include a review of data from deployments of EnglishCentral’s MiMi AI Tutor in over 20 universities with over 700 students in Japan this past fall. This data will encompass information on student motivation and how students' results align with CEFR-based 'CAN-DO' metrics.
Alan Schwartz is CEO & Founder of EnglishCentral. He started his career in EdTech as an English teacher in China over 25 years ago. He then spent over a decade in the AI tech industry, including as head of Nuance's Mobile & Consumer division, where he worked with Sony to develop Talkman, one of the first mobile games to use speech technology. In 2009, with support from Google Ventures, he founded EnglishCentral, which has become one of the leading conversational platforms for learning English online.
This qualitative study examines the effectiveness of integrating ChatGPT, a large language model-based chatbot, into a first-year college academic writing class with five student participants. The students attended a lecture on ChatGPT usage and read an article on crafting effective prompts. Their reflections on using ChatGPT offer insights into its impact on their writing process and outcomes. These reflections highlight ChatGPT's ability to enhance essay writing by improving clarity, consistency, and professionalism. Students noted the chatbot's effectiveness in providing specific feedback on grammar, syntax, and content organization. They appreciated ChatGPT's role in generating topic ideas, refining thesis statements, and proposing relevant examples. The findings suggest that ChatGPT contributes to skill development, particularly in incorporating sources, simplifying language, and strengthening arguments. Ultimately, students expressed their willingness to continue using AI tools for future writing tasks, recognizing their effectiveness in enhancing writing quality.
This presentation explores the innovative use of ChatGPT in a research project that brought together Japanese, Indonesian, and Filipino university students for a one-day online conference focused on global issues. Faculty advisors worked with ChatGPT in preparing students for this event, providing language support and cultural insights, and assisting in the development of discussion topics and materials. The conference began with a 30-minute ice-breaking session centered on culture and interests, and then moved to two 40-minute student-led discussion sessions on global challenges. Post-conference, ChatGPT was instrumental in analyzing both quantitative and qualitative data from a follow-up questionnaire. This survey focused on participants' experiences and perceptions of each stage of the conference. ChatGPT’s advanced language processing capabilities were utilized to interpret open-ended responses, providing deeper insights into students' cultural understandings and engagement levels. This interactive talk will highlight how ChatGPT enhanced the overall preparation and analysis processes, contributing to a richer, more effective cross-cultural exchange. We will share findings on the impact of this AI-assisted approach on students' communication skills, cultural awareness, and comprehension of global issues. The session aims to showcase the potential of AI tools like ChatGPT in enriching international educational experiences and facilitating in-depth analysis of educational research data. Attendees will gain valuable insights into leveraging AI for enhancing global citizenship and intercultural collaboration in academic settings.
Spoken interactions, including turn-taking activities, discussions, task-based learning, and computer-supported collaborative learning, are common in foreign language learning classrooms. However, the neural mechanisms underpinning such foreign language exchanges remain insufficiently explored. Consequently, questions arise regarding the emergence of neural alignment between activity partners and its potential contribution to mutual understanding and the quality of communication. Advances in technology allow researchers to assess these interactions from a two-person perspective as opposed to the single person studies of the past. This presentation will begin by explaining the use of simultaneous dual-brain imaging – hyperscanning – to investigate the dynamic neural activity of two or more interacting individuals and briefly outline some of the recent research in this emerging field. Next, I will outline my own ongoing research using dual electroencephalography (EEG) recordings in an English as a foreign language context and share preliminary findings, including how machine and deep learning models show promise for analyzing dual-imaging data. Following this, I will look at what can be learned from functional brain connectivity patterns. Finally, I will address the implications and limitations of using this technology, along with ethical considerations as researchers investigate one of the last preserves of human privacy.
The ultimate goal of feedback in the writing process is to give students a chance to independently and critically analyze their own writing (Hyland & Hyland, 2006). The rise of generative artificial intelligence (AI) in classroom settings opens up the possibility of students using AI to ask questions about and receive feedback on their writing. However, how much students trust feedback from AI over their instructor, and to what extent AI is able to give quality feedback, remain unclear. This presentation will discuss students' perceptions of AI as a form of feedback within a writing class and seek to understand how these perceptions shift throughout the semester as students engage with AI-generated feedback. At the start and end of the semester, students were surveyed on their beliefs about AI's applicability inside the classroom, their views on the accuracy of AI responses, and their attitudes about feedback. Students were tasked with using any AI tool of their choice for feedback on their essay drafts. Reflection journals were used to record the specific questions they asked when seeking feedback and the writing choices they made. Additionally, students had chances in class to seek feedback from their instructors or peers, and the differences in their perceptions of these processes were also surveyed. The outcomes offer insight into students' changing attitudes, knowledge, and practical use of AI in writing education. By examining the changes in students' perspectives over time, the presentation explores integrating AI into the writing classroom as a supportive tool for garnering feedback. The findings provide nuanced insights into learners' experiences, emphasizing the potential impact of AI on writing education and contributing to the ongoing discourse on the role of technology in the learning environment.
Hyland, K., & Hyland, F. (2006). Feedback on second language students’ writing. Language Teaching, 39(2), 83–101. https://doi.org/10.1017/s0261444806003399
“[T]he entire ELT community needs to see any threats to individual well-being as a collective community issue” (Mercer, 2021, p. 19). However, the extent to which language teachers are prepared to handle AI while sustaining their mental well-being still requires more insight. With that in mind, a case study was conducted in the context of Vietnamese higher education to examine how important support, technological support in particular, is to language teachers at the tertiary level in enabling them to handle AI and safeguard their well-being. Semi-structured interviews were conducted with Vietnamese teachers of English at the tertiary level. Transcribed data were analyzed using the constructivist grounded theory approach (Charmaz, 2006). In this presentation, the theoretical framework underpinning the study is discussed together with the detailed procedure of the data analysis. Findings suggest that the teachers did not receive sufficient technological support and training from their institutions, which could eventually make them more resentful toward the potential of AI and significantly affect their mental well-being. It is therefore hoped that the presentation can provide suggestions to both institutions and language teachers on how to make use of the advantages that AI brings and, in doing so, sustain teachers' well-being. While it is highly recommended that institutions provide fundamental technological support to their teachers, teachers should not rely solely on that assistance but should take the initiative to develop their own skills and teaching practice.
References
Charmaz, K. (2006). Constructing grounded theory. SAGE Publications.
Mercer, S. (2021). An agenda for well-being in ELT: An ecological perspective. ELT Journal, 75(1), 14–21. https://doi.org/10.1093/elt/ccaa062
Recently, machine translation (MT) services such as Google Translate and DeepL, and generative AI tools like ChatGPT and Google Bard have become a topic of interest for many language teachers. Some teachers are concerned about threats to academic integrity, while others are interested in the educational potential of such tools and how best to effectively harness them for learning. In this presentation, we explore how teachers are addressing the use of MT and generative AI in the classroom in Japan and how they believe these tools will affect the language teaching profession.
A pilot survey was initially conducted regarding how teachers address the use of MT and generative AI in their classrooms (N = 32). Of the university English instructors in Japan who responded to the initial pilot survey, nine were interviewed. Interviewees shared their teaching context, their approaches to addressing MT and generative AI use in their courses, and their thoughts on the future of MT/AI and the language teaching profession.
The interviewees' instructional approaches included increasing the monitoring of student coursework, "re-walling the classroom" with group work, paper-and-pencil assessments, interviewing students about the content of their writing assignments, impromptu assessments, and strengthening the process writing approach. Other teachers fully embraced the technologies in their courses, teaching students about effective and ineffective uses of MT/AI or encouraging their critical use at certain steps in the writing process. Others avoided the use of these tools altogether.
The presenters will explain the various interviewees' approaches towards MT and generative AI in the language classroom. Some thoughts about the future of these technologies in the language teaching profession will also be shared.
JALT Intercultural Communication in Language Education (ICLE) SIG Presentation
The advent of widely accessible generative AI tools such as ChatGPT has taken foreign language education by storm, fueling a raging debate regarding the possibilities, limitations, and even the adequacy of using AI for language learning. In this context, it could be said that one particular area epitomizes AI's inherent inadequacies: the intercultural experience of learning a language. Because language and culture form an indivisible unit (the Sapir-Whorf hypothesis), becoming proficient in a foreign language requires much more than just learning grammar, spelling, vocabulary, pronunciation, and so on; it also requires a deep understanding of the cultural underpinnings of the language. Hence, common sense dictates that AI is hardly an adequate tool for this purpose ... or is it? In this presentation, I will argue that the answer to the question of whether AI can be used for teaching cross-cultural communicative skills largely depends on the position a given language teacher takes regarding (a) what intercultural communication (IC) is in the first place, and (b) what the role of IC in foreign language education should be. Accordingly, the presentation will be structured in two major parts. First, leading theoretical approaches to IC (in particular, Bennett's DMIS, Byram's MICC, Moran's Cultural Knowings Framework, Van Dyne's Four Factor CQ Model, and Shaules' DMLL) will be analyzed in terms of how they might allocate (or even preclude) a role for AI in language learning. Second, existing examples of contemporary AI tool use in IC education settings will be presented as a means of illustrating the present possibilities and limitations of AI in IC. Last but not least, the presentation will turn to possible future developments in AI technology that could upend our current answers to the question that guides this presentation.
Modern English language education is undergoing a transformative shift, embracing a holistic approach that nurtures 21st-century competencies such as integrated communication fluency and critical thinking. This evolution necessitates a comprehensive reskilling and upskilling of language instructors, a challenge that this innovative pedagogical framework addresses. Motivated by the strategic synergy exemplified by Shohei Ohtani's dual mastery in baseball, this framework for an "English for Academic Purposes" course combines Artificial Intelligence (AI) advancements with the authentic intelligence (ai) of human expertise.
This framework features essential AI-driven components to support this pedagogical transformation: Adaptive Learning & Feedback, employing algorithms to tailor linguistic content to suit individual learner needs and provide immediate, personalized feedback; Immersive Communication Tools, creating authentic practice environments to promote fluency and confidence in language learning; Data-Driven Progress Tracking, analyzing learner performance to customize learning paths and tracking progress towards language competencies; and Content Creation & Resource Curation, generating and selecting appropriate, relevant, and engaging materials tailored for a given group of learners.
Complementing these technological advancements, human elements are crucial for fostering an authentic and stimulating learning environment. Expert Facilitation & Contextual Feedback involve instructors guiding students and offering deep, qualitative feedback on complex skills for real-world language application, while Collaborative Professionalism & Intercultural Engagement encourage professional conversations and intercultural collaboration through human-led tasks. Creative Insight & Critical Thinking are achieved through tasks or projects that go beyond traditional textbooks, encouraging the practical application of language skills. Acknowledging AI’s limitations, Ethical Considerations & Social Responsibility ensure human instructors address key issues to 21st-century learners, like data privacy, algorithmic bias, and responsible digital communication practices.
Preparing for an engaging, effective, and ethically sound teaching and learning pathway, this framework is intended to revolutionize English language education by balancing AI and ai in order to meet the diverse needs of contemporary learners. This signifies a substantial departure from specialized skill instruction toward the comprehensive development of vital proficiencies, aided by cutting-edge technology and augmented by visionary language instructors.
This study brings together ELF learners from Taiwan and Poland, tasking them with co-creating virtual reality tours, to determine the effects of this type of engagement on intercultural learning, motivation, and multimodal literacies. Eight graduate students drawn from a TEFL methodology seminar at a national university in northern Taiwan were paired with an equal number of MA pre-service teachers from a public university in Poland. During the 12-week interaction, participants communicated weekly through Zoom to complete three-stage telecollaborative tasks: information exchange on scenic spots with cultural significance of potential interest to international partners; comparison and contrast of one selected tour site from Taiwan and one from Poland; and co-creation of virtual reality tours of the sites using VR software. Results showed that participants learned about the target culture, increased interest in it, and gained appreciation for cultural differences. They also became more confident in expressing themselves through authentic interaction. The co-construction process enhanced their motivation for learning, improved intercultural competence, and cultivated digital literacies. The study serves as a blueprint for integrating virtual reality technology into intercultural exchange projects.
Virtual exchanges and COIL (Collaborative Online International Learning) allow participants to communicate with peers around the world, an experience that can be highly motivating for students. Virtual exchanges often get off to an exciting start when students first meet their counterparts from another country. However, the exchanges soon start to suffer due to a lack of content that would foster engaging discussions. Extensive Reading provides students with input, but many students lack motivation to read. Combining the two can be a recipe for success. In this session, the presenter will discuss a proposed virtual exchange in which, each week, students prepare for a meeting with their partner in another country by reading a graded reader about a different aspect of their partner’s country. Example topics are school life, pop culture, holidays, traditional food, dating, etc. In addition to reading about a topic, students can prepare questions about points in the reading that they are curious about or did not understand. Entering the meeting with detailed knowledge of the topic and some related questions can result in more engaging interaction. Furthermore, the content of the graded readers could provide the basis for a final project between participants, such as a presentation.
Virtual Exchanges (VEs) promote language acquisition and increase communication confidence, self-perceived communication competence (SPCC), digital literacy, cross-cultural understanding, collaboration skills, and intercultural competence. Current literature has yet to explore how the time spent on VE tasks and the complexity of those tasks relate to students' SPCC, communication anxiety, or self-efficacy. This presentation compares a long and a short VE held a year apart with similar students. The first VE was an eight-week collaborative online international learning (COIL) project with students from Japan, Spain, and Türkiye. The second VE lasted less than three weeks and involved students from Japan and Korea. The COIL project required English as a lingua franca to complete reports and a video on implementing the SDGs at school. The shorter VE also used English as a lingua franca but required students to create two Flip videos and hold a short synchronous conversation. Students completed pre- and post-VE surveys containing Likert-type statements and open-ended questions. In both cases, students reported increased self-efficacy and SPCC and decreased communication anxiety. However, the changes were more pronounced in the COIL project. These results highlight the value of VEs, even short ones. VEs allow students to engage with peers from other cultures and can change perceptions of language learning from something academic to something communicative. Not all students benefitted equally from the VEs. This presentation will suggest task types and VE lengths based on student profiles.
This study aims to develop, employ, and evaluate an immersive virtual-reality (VR) campus tailored for novice students learning Chinese as a Foreign Language (CFL) at a Taiwanese university. The primary goal is to equip incoming international students with essential Chinese language skills critical for successfully adapting to the academic environment. Incorporating an inventive blend of Content and Language Integrated Learning (CLIL) and Task-based Language Teaching (TBLT), the curriculum emphasizes immersive exposure to the Chinese language within task-oriented scenarios, complemented by instructional guidance in English. Utilizing VR technology, this platform, crafted using Gather Town, seamlessly merges physical and digital elements to augment comprehension and motivation among early-stage language learners. The active tracking feature allows for personalized learning paces, aligning with the principles of Education 4.0.
The study documents the development, implementation, and evaluation of the VR campus, founded on a detailed needs analysis involving 18 CFL students. This comprehensive curriculum integrates survival skills, language proficiency, cultural insights, and communication competencies across eight units. Each unit encompasses introductions to school buildings, essential vocabulary, daily conversation and sentence structures, practical tasks, and conclusive assessments. Furthermore, user feedback from a trial involving six CFL students highlights the interactive nature of the learning environment while addressing concerns about task complexity, platform guidance, and future improvements.
Feedback derived from six CFL students, exhibiting varying levels of Mandarin proficiency, underscores the interactive and enjoyable nature of the learning environment. While acknowledging the platform’s ease of use for self-study, students expressed concerns about task difficulty, platform instructions, and future updates. By integrating user and expert feedback, this study aims to present a comprehensive review of the VR application, potentially revolutionizing CFL teaching methods while adhering to Education 4.0 principles.
Traditional classroom-based language teaching often lacks context and natural elements crucial for effective language acquisition. Although research has shown the importance of turn-taking, disfluency, embodiment, and gesture use in natural conversation, gathering quantitative data to identify patterns has been challenging for language teachers. Previous research relied on annotating video stills, which is informative but not quantitative, making it difficult to understand significant differences in gestures over temporal units of conversation. Recent AI advances have led to tools that allow teachers to document and analyze situated language use. This presentation introduces a web application developed by researchers in the Yamada Lab at the National Institute of Technology, Hakodate College, enabling amateur researchers to apply pose detection to their research. The application outputs a CSV file for statistical analysis, adding rigor and identifying typological differences in gesture use. The presentation will demo a case study observing iconic gestures in online VR language exchanges and provide participants with hands-on experience using a Quest 3 headset and the pose-detection app developed by Yamada-Lab. By raising awareness of authentic gesture use and embodiment in speech, we can develop more effective language materials that improve students' communication skills and help to move language teaching forward.
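The Yamada Lab web application itself is not distributed with this abstract, so the script below is only a generic illustration of the same workflow using the open-source MediaPipe and OpenCV libraries: run pose detection frame by frame over a recorded exchange and write the landmark coordinates to a CSV file for later statistical analysis. The file names are placeholders.

import csv
import cv2
import mediapipe as mp

def video_to_landmark_csv(video_path: str, csv_path: str) -> None:
    pose = mp.solutions.pose.Pose(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "landmark_id", "x", "y", "z", "visibility"])
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB images; OpenCV reads frames as BGR
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                for i, lm in enumerate(results.pose_landmarks.landmark):
                    writer.writerow([frame_idx, i, lm.x, lm.y, lm.z, lm.visibility])
            frame_idx += 1
    cap.release()
    pose.close()

video_to_landmark_csv("exchange_session.mp4", "landmarks.csv")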
The MAVR Immersive Learning Showcase at the JALT CALL Conference 2024, taking place at Meijo University Dome Mae Campus, Nagoya from May 17th to 19th, represents a critical junction of pioneering insights and breakthroughs in mixed, augmented, and virtual realities, specifically tailored for language learning.
This event mirrors the MAVR Special Interest Group's commitment to advancing educational technologies. It underscores recent progress and strategic pathways for embedding immersive experiences alongside AI-driven approaches in language learning, aligning with JALT CALL's theme of exploring the potential of AI in enhancing language education methodologies.
From the State of MAVR Address by Eric Hawkinson, extending through interactive workshops and research reports by Shari Eunice San Pablo, Roberto Figueroa Jr., Florinda Amparo Palma-Gil, and Mehrasa Alizadeh, the forum highlights the critical roles of diversity, inclusion, and collaborative exploration in crafting immersive learning environments, with discussions related to artificial intelligence. This inclusion of AI enriches the discourse, underscoring its potential to revolutionize immersive learning by enhancing personalized learning experiences, automating feedback, and facilitating deeper engagement with language content.
The roundtable discussion broadens this conversation further, encouraging all attendees to share insights and ideas, contributing to a shared vision for future research and practical applications in the field that include AI as a pivotal element.
This event serves as a powerful illustration of MAVR’s dedication to advancing educational practices through technological innovation, including AI, thereby nurturing a community of collaboration, innovation, and inquiry.
Presentation 1: State of MAVR Address 2024
Eric Hawkinson, MAVR President, Kyoto University of Foreign Studies
The 2024 State of MAVR Address provides a comprehensive overview of the Mixed, Augmented, and Virtual Realities (MAVR) Special Interest Group's recent advancements, key achievements, and strategic direction within the domain of technology-enhanced language learning. It emphasizes the organization's adherence to principles of diversity, inclusion, and respectful engagement, aligning with the JALT Code of Conduct. The address delineates MAVR's contributions to fostering immersive learning experiences through augmented and virtual realities and outlines the potential of AI's convergence with these frontier technologies. It also describes ongoing initiatives to support and expand language education, including the solicitation of contributions to the MAVR Journal and the facilitation of the "MAVR in the Metaverse" event series. This discourse serves to articulate MAVR's continued commitment to innovating at the nexus of educational technology and language instruction, while setting a forward-looking agenda for research and community engagement.
Presentation 2: Lessons and Exhibits in the Metaverse
Mehrasa Alizadeh, Shari Eunice San Pablo, and Roberto Figueroa Jr. will teach participants how to use FRAME (https://framevr.io/) to create VR-based language learning lessons and exhibits. The presenters will describe different cases of how the metaverse was used, including how they delivered online EFL flipped lessons and facilitated global exchanges through virtual exhibits using FRAME. FRAME is a web-based VR platform that lets people access a space from their computer or phone without a headset. Depending on the subscription plan, the platform can host varying numbers of students in spaces such as desert islands, conference rooms, and exotic gardens. After entering a space, students build avatars and can communicate in groups or use private voice zones for smaller groups or pair work.
Presentation 3: Virtualizing Campus Tours and Developing VR Tours for Language Education in Blended Learning Contexts
Roberto Figueroa Jr. and Florinda Palma Gil will teach participants how to use various VR tour platforms such as Kuula, H5P, and 3D Vista. They will also share how they have used these VR tours for teaching Filipino language education in online and blended contexts, as well as for promoting their university through virtual campus tours.
This study investigates the use of digital learning resources, specifically the Adaptive English Learning Platform, to enhance English reading skills and motivation among elementary school students. With the increasing prominence of English as a global language and the challenges faced in developing primary-level English reading skills, the study explores the practical impact of digital learning on language proficiency and motivation. The Adaptive English Learning Platform, developed by a collaborative team of educational scholars, AI experts, and teachers, has garnered attention for its innovative approach to adaptive teaching. The study's motivation stems from the platform's unique features, its international recognition, its proximity to the school's curriculum, and its potential contribution to more intelligent and personalized educational methods. The research objectives encompass evaluating the impact of digital learning on English reading achievement, understanding students' motivation in digital environments, comparing traditional teaching with digital learning, and providing policy references for educational institutions. The research questions focus on improvements in English reading ability and motivation after using the digital learning platform. The scope of the study involves third-grade students at an elementary school in northern Taiwan and includes a background investigation, pre-test and post-test assessments, and an analysis of platform usage. The study anticipates challenges related to individual differences, external factors, and platform usage, with ethical considerations guiding participant consent and data privacy. The study is expected to contribute by providing insights into the English learning environment, evaluating the effectiveness of the Adaptive Learning Platform, analyzing students' usage behavior, and offering empirical references for educational policies.
The integration of Artificial Intelligence (AI) in educational settings has opened new avenues for enhancing learning experiences. This study focuses on understanding students' perceptions of using AI, specifically ChatGPT (version 3.5), to provide feedback on writing performance in a Content and Language Integrated Learning (CLIL) course. The course, centered on Data Description and Analysis, required students to write weekly paragraphs on business-related data topics. Our methodology involved students maintaining a Writing Journal. Each week, they were given a prompt with specific criteria to assess. Students inserted these prompts and their draft paragraphs into ChatGPT, receiving feedback on adherence to the criteria and corrections for any grammatical or spelling errors. Based on the AI's feedback, students revised their drafts for their final weekly submissions. After 14 weeks of this process, students completed a survey evaluating their experience, particularly in terms of writing skill improvement and task completion efficiency. The results revealed that students generally perceived the AI-assisted method as beneficial for enhancing their writing skills and improving their performance in completing assignment tasks. They appreciated the immediacy and specificity of the feedback. However, there were notable concerns regarding the complexity and relevance of the feedback provided. Some students found the AI-generated advice convoluted, challenging to comprehend, or occasionally off-topic. Despite these issues, the overall response was positive, with students valuing the innovative approach to learning and its impact on their writing proficiency. This study underscores the potential of AI tools like ChatGPT in educational contexts, particularly in enhancing writing skills. It also highlights the need for improvements in AI feedback mechanisms to ensure clarity and relevance so that such tools can be more effectively integrated into learning environments.
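The course's actual weekly rubrics are not reproduced in the abstract, so the template below is only a hypothetical example of the kind of prompt students might paste into ChatGPT each week, combining the stated criteria with their draft paragraph; the criteria wording and the draft are invented for illustration.

WEEKLY_PROMPT_TEMPLATE = """You are reviewing a short business data-description paragraph.
Check the draft against these criteria and say whether each one is met:
{criteria}
Then list any grammar or spelling errors with corrections.
Draft:
{draft}"""

criteria = "\n".join([
    "1. Identifies the data source and time period.",
    "2. Describes the overall trend before the details.",
    "3. Uses at least three expressions for increase/decrease.",
])

draft = "The graph show sales of the company between 2019 and 2023..."

# The formatted string is what a student would paste into the chat window
print(WEEKLY_PROMPT_TEMPLATE.format(criteria=criteria, draft=draft))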
This study explores the potential benefits and drawbacks for EFL business writing teachers planning to adopt ChatGPT, a popular chatbot built on a transformer-based language model known as a generative pre-trained transformer (GPT). It reports on an 18-week business writing course taken by 20 EFL writers in the Fall 2023 semester in an English department at a public technological university in Taiwan. The study examines what ChatGPT can and cannot do to help prepare students for workplace communication, how the students value ChatGPT, and the growth and challenges observed in this ChatGPT-enhanced course. Altogether, nine business genres were taught: resume, short bio, cover letter, inquiry, reply, order, complaint, recommendation, and business report. Two business writing professionals were invited to evaluate the business documents produced by the students with the help of ChatGPT.
The data include the evaluations from the two writing professionals and interviews with the students and the class instructor, focusing on their feedback on adopting ChatGPT. The findings suggest that ChatGPT tended to function relatively efficiently when producing documents in pre-fabricated business genres, such as the cover letter, inquiry, reply, order, and complaint, but it failed to provide the innovation and uniqueness anticipated in the resume, short bio, and business report. ChatGPT may seem to save drafting time for business writers, as it automates routine documents. Nevertheless, before deploying ChatGPT, business writing teachers and students must carefully analyze its strengths and restrictions. ChatGPT requires training specifically geared toward the business domain and may produce naive and erroneous judgments, necessitating in-depth post-draft revision.
From 2020 to 2023, many teachers were forced into online-only classes, which created both a burden for teachers and an opportunity to expand our teaching repertoire to integrate more online resources and activities. This presentation will present a number of online methods and techniques to help students learn about, develop, and engage in critical thinking. From detailed critiquing and research-based activities to easy introductory activities for critical thinking, this presentation will share methodology that has been used in the presenter's classes across all levels of language learners. The deconstruction and supplementation of textbook-based activities will also be detailed. Participants will come away with ideas and handouts that can be used in their classrooms and that can serve as a starting point for developing their own activities. The presenter will also discuss ideas for specific use cases raised by attendees during the question-and-answer section of the presentation.
Targeted at EFL professionals tasked with conference management, this interactive presentation outlines a streamlined, tested approach to deploying and successfully managing a call for papers on the Indico conference management platform. From initial setup to the successful execution of each stage of a CfP, participants will learn how to leverage Indico’s comprehensive suite of tools. The session will cover key features such as event creation, abstract submission guidelines, review process configuration, and communication with participants. Attendees will leave with actionable insights and best practices for using Indico to enhance the educational impact and reach of their EFL conferences, making the most of this powerful platform to foster academic excellence and collaboration.
This pilot study examines the efficacy of an adaptive CLIL science reading platform (ACSRP) for facilitating personalized learning among fifth graders in an after-school International Education program in Taiwan. Following the design-based research paradigm, which consists of grounding, conjecturing, iterating, and reflecting (Hoadley & Campos, 2022), the researchers first analyzed the learning needs of fifth graders enrolled in a content and language integrated learning (CLIL) science course in Taiwan. Based on the identified learner needs, the researchers created balanced reading tasks with three adaptive mechanisms on the platform to accommodate learners’ individual (a) reading proficiency, (b) multimodal preference, and (c) word knowledge. The balanced reading tasks aimed to improve the target learners’ science reading skills through four different task types – (a) Recognizing Word Meaning, (b) Sentence Structuring, (c) Text Comprehension, and (d) Reading Fluency (realized through read-aloud tasks). The platform includes pre- and post-reading tests, multimodal balanced reading tasks in which learners can choose their preferred learning mode (visual, aural, or linguistic), a scoreboard, and post-reading games focusing on vocabulary. The reading topics include soil, plants, and forces. The reading platform was developed on Unity and was refined after several trials over a six-month period. The pilot study in real classroom settings took place in two subsequent semesters with an experimental group and a control group. The adaptive mechanisms were evaluated for their efficacy based on researcher observations and a student survey. In the first semester, the first two mechanisms were evaluated for their efficacy. Then, in the second semester, all three mechanisms were tested, with a learner survey administered after the use of ACSRP. Survey results show that the multimodal balanced reading tasks motivated the target learners to engage in reading about science, and their reading skills also improved with the use of the platform.
As the landscape of generative AI continues to evolve, educators have been exploring its potential applications in teaching, especially its ability to generate presentations and visual aids for teaching. This workshop offers participants a unique opportunity to delve into the practical aspects of leveraging various AI platforms and tools to generate images and presentations. Through hands-on demonstrations and guided instruction, attendees will gain insights into harnessing the power of AI to develop visually engaging materials conducive to effective teaching.
The use of automated writing evaluation (AWE) in second language (L2) writing contexts has increased dramatically, as evidenced by the large body of research published on the topic over the past decade (e.g., Li, 2021; Link et al., 2022; Zhang, 2020). Considering this, several systematic reviews on AWE have been published (Ding & Zou, 2023; Ngo et al., 2022; Zhai & Ma, 2022). Nevertheless, none of these review studies has specifically focused on the use of Grammarly in L2 writing contexts. Grammarly is a popular intelligent writing assistant that offers real-time automated corrective feedback while writing, unlike other well-researched AWE systems such as Criterion and Pigai. Over the past several years, many studies have examined the use of Grammarly in L2 writing settings (e.g., Barrot, 2023; Dizon & Gayed, 2021; Ranalli, 2021; Yousofi, 2022). Yet, as aforementioned, no systematic reviews have exclusively focused on the use of Grammarly. Considering this research gap, this presentation reports on a systematic review study that investigates the specific trends, methodologies, and results in Grammarly research involving L2 learners. Two major databases (Web of Science and Scopus) were used to search for relevant peer-reviewed literature published between 2009 and 2023. A total of 25 studies met the inclusion criteria and thus were included in the analysis. Thematic analysis of these studies is still ongoing and will be reported during the presentation. Implications of the findings for pedagogy and research will also be discussed.
As attention spans vary individually and are influenced by different factors, the cultivation of active participation and engagement among students remains one of the paramount goals for educators. This workshop is designed for all levels of Quizizz users, and it explores the innovative use of Quizizz, an online learning platform suitable for creating dynamic and participatory learning experiences.
Beginning with an exploration of Quizizz's key features, including its gamified elements, real-time feedback, and accessibility across various devices, the workshop provides a comprehensive overview. Participants of all levels of Quizizz proficiency will learn to design interactive questions enriched with multimedia elements. Practical examples from diverse language learning settings will illustrate the versatility of these features. Emphasis will be placed on utilizing the platform to cater to diverse learning styles and creating a more inclusive educational environment.
Notably, participants will experiment with Quizizz's latest feature, AI assistance, recognizing the significance of staying current with prevailing trends in education. This addition enhances the platform's adaptability to various learning styles and underscores the importance of educators taking full advantage of AI's potential in their teaching practices.
Through hands-on activities, attendees will acquire practical insights into leveraging Quizizz to create engaging and interactive activities that captivate students' attention. The workshop aims to empower educators with the skills to utilize Quizizz as a potent tool for formative assessment and continuous feedback, fostering active learning and collaboration in the classroom.
By the end of the workshop, participants will have the knowledge and resources to confidently integrate Quizizz into their teaching practices to enhance student engagement and effectively prepare students for success in their language learning journeys.
Do students even need real teachers anymore? That is a question floating around Silicon Valley in light of recent advancements in AI. Are they caught up in the hype, or is there a foreseeable path to AI teachers?
The present state of LLMs leaves a lot to be desired for advanced applications designed to embody intelligent language teachers. We cannot simply prompt our way to a solution with existing models. A broad-spectrum approach combining technology across machine learning disciplines is imperative in building rich experiences for students. Essential for this mission is an ocean of quality data - LLMs for language learning require specific data that captures the language learning experience.
ImmersiveAI is a startup attempting to develop an AI language tutor. In doing so, our company is pushing the limits of AI and assessing the learning outcomes for 20,000 language learners. ImmersiveAI has developed key insights into exactly how AI needs to improve before it can replace a tutor and where it already excels. Currently, AI enables the mass proliferation of learning techniques centered around comprehensible input and conversational interaction, as demonstrated in our application, Crosstalk. Advanced text-to-speech and prompt-imbued personalities keep students engaged, whereas earlier chatbots felt like talking to a dull robot.
Personalized level-matching tailors the learning experience to each student, thereby facilitating greater understanding. Contextual awareness enables improved error correction and translation. Even though intelligence capable of replacing human tutors does not yet exist, student outcomes at this early stage are promising.
Language learning is transformative for the individual and society at large. Very few students have access to a private tutor, and many students rarely have the opportunity to practice with anyone at all. If everyone had a private language tutor in their pocket available 24/7/365, what would the world look like? How do we get there?
Recently, educators have started to leverage large language models like ChatGPT for Japanese language education. In this study, I created a ChatGPT-like chatbot to output voiced character language for Japanese learning. Character language captures the unique speech patterns of fictional characters, influenced by their age, gender, personality, and environment. These patterns are especially present in anime. For example, samurai characters tend to use de gozaru instead of the modern auxiliary verb da. Wise female characters prefer the archaic first-person pronoun warawa over the modern watashi and use the noja expression to end sentences instead of noda. Understanding that language acquisition extends beyond reading and writing to include listening and speaking, I gave the character chatbots AI-generated voices fine-tuned to their specific character. This not only makes the lessons more engaging, but also provides a comprehensive language learning tool. Through interaction with voiced character language chatbots, Japanese learners acquire speech patterns that they cannot learn through traditional mediums like textbooks and mobile applications. In addition, while learning Japanese is a highly difficult and daunting task, these chatbots are engaging and provide a fun way to learn Japanese. I will evaluate the effectiveness of these voiced character language chatbots in a mobile application.
Generative AI has recently saturated various language learning platforms and applications, such as Duolingo, EnglishCentral, and ELSA. The integration of features powered by generative AI offers potential advantages for language learners, such as individualized feedback and human-like interaction. However, the exact benefits of generative AI features for language learners are yet to be confirmed.
This study examines the use of a generative AI chatbot, Replika, in text-based English language conversation practice and analyzes whether the teacher’s corrective feedback on said conversations with the chatbot affects the writing skills of English language learners (ELLs). The participants comprise 34 university-level Japanese ELLs, with the experimental group (n = 17) having conversations with the chatbot Replika and receiving corrective feedback from the teacher, and the control group (n = 17) practicing English conversation with the chatbot Replika only. Writing skill is evaluated through selected CAF indices (average sentence length, the lexical diversity measures voc-D and MTLD, spelling errors per 100 words, and number of tokens), which are measured in pre- and post-test writing tasks with Text Inspector, a tool for text analysis. Participants were also asked to provide feedback and impressions at the end of the experiment.
The results show a significant improvement in all six CAF indices regardless of corrective feedback from the teacher. However, the teacher’s feedback seems to be particularly effective for the development of lexical diversity as measured by voc-D and MTLD. The results suggest that text-based conversations with the generative AI chatbot Replika potentially allow adult ELLs to improve some aspects of their writing (number of words in a sentence, diversity of vocabulary, spelling errors), even without corrective feedback from the teacher. However, lexical diversity improves more in the presence of corrective feedback. The study offers a case in favor of the use of generative AI for writing.
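For readers unfamiliar with the lexical diversity indices named above, the following is a minimal Python sketch of MTLD (Measure of Textual Lexical Diversity) using the conventional 0.72 type-token-ratio threshold. It is an illustration only, with naive tokenization; Text Inspector's exact implementation and the study's computations may differ in their details.

```python
# Minimal sketch of MTLD; illustration only, not the tool used in the study.

def mtld_one_direction(tokens, threshold=0.72):
    factors = 0.0
    types = set()
    count = 0
    for tok in tokens:
        count += 1
        types.add(tok)
        ttr = len(types) / count
        if ttr <= threshold:
            # A full "factor" is complete; reset and keep scanning.
            factors += 1
            types.clear()
            count = 0
    if count > 0:
        # Credit the remaining partial factor proportionally.
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors > 0 else float("nan")

def mtld(text):
    tokens = text.lower().split()  # naive tokenization for illustration
    forward = mtld_one_direction(tokens)
    backward = mtld_one_direction(list(reversed(tokens)))
    return (forward + backward) / 2

print(mtld("the quick brown fox jumps over the lazy dog " * 5))
```

Higher MTLD values indicate that longer stretches of text sustain a high type-token ratio, i.e., greater lexical diversity.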
Generative AI (GenAI) is rapidly advancing language education by providing innovative learning tools. Understanding its adoption within academic contexts, particularly among university students in Japan, is vital to fostering the integration of GenAI in language learning courses and realizing its full educational potential. The research described in this presentation explores how students in Japan are adopting GenAI tools such as ChatGPT, reflecting on their current and anticipated use, and addressing underlying concerns about accuracy and ethical use. Results from two surveys, conducted in October 2023 and April 2024, will be analyzed to compare changes in students’ perceptions and usage of GenAI.
In the initial survey of 101 first- and second-year university students in Japan, 94.8% were aware of generative AI (GenAI) tools like ChatGPT, yet 60.8% weren’t using them for study, citing concerns over accuracy (47.3%) and ethics (31.7%) as the main deterrents. Those who were using GenAI did so for research (70%), idea generation (47.5%), and writing (35%). Despite these hesitations, 68% expect to use GenAI in the future, foreseeing benefits in time-saving and research efficiency. The initial survey pointed to a gap between awareness and practical use, alongside a hopeful outlook for future integration.
The follow-up survey hopes to uncover any significant changes in the use and perception of GenAI relating to study habits and whether the concerns over accuracy and ethics have been alleviated. The anticipated comparative analysis will be particularly beneficial for educators, curriculum developers, educational policymakers, and AI technologists who are at the forefront of integrating cutting-edge AI tools into learning and teaching methodologies. Participants will gain invaluable insights into the evolving student attitudes toward GenAI, which can inform the development of more effective, ethically sound educational practices and AI applications.
Extensive reading (ER) means reading a lot of texts that are interesting and easy to understand. Artificial Intelligence (AI) is an ambiguous term but often refers to machine learning techniques and large language models (LLMs), where recent developments create a possibility for the automatic production of texts ideally suited to a language learner for ER. In addition, we may learn useful lessons about the language learning process by considering the way machines have become proficient over the past few months. What can large language models teach us about learning languages? How can learners use this technology? And how does this industrial revolution technology change motivations and requirements for language learning and education in general?
AI Generated Imagery (AIGI) produces racially biased images, and the implications of such biases are problematic because the proliferation of such imagery can amplify stereotypes. However, little research has been done into whether L2 students are able to identify and articulate these biases. Furthermore, because current AIGI is text based, there is a crucial language component that can be explored by L2 students. This presentation introduces research conducted at a prominent Japanese art university into how L2 English students identified AIGI, used text-prompted technology to generate images, and finally reflected on the inherent biases produced. The research has implications for correcting and diversifying data sets.
Twenty participants in an intermediate/advanced English Communication (L2) course in the School of Creative Innovation at a leading art and design university in Japan explored text-generated AI images and their implications over the course of 15 weeks. Students engaged with AI-generated imagery by creating it, discussing it, and answering questions about it. In the concluding weeks, students were tasked with creating prompts, were given sample prompts in class, and were asked to reflect on the output.
Images were biased in terms of gender and race, and students were asked to comment on these biases. In the images created from the prompt “Leading doctor,” students recognized that 72% of the images were of middle-aged men with glasses and that 100% of the images were of light-skinned people. The images resulting from “leading scientist” generated similar results. When prompts included racial characteristics (Japanese/Chinese), the program revealed extreme stereotyping. Skin-tone bias was also prevalent in the photo-realistic images when prompts included income level or job position. While some students concluded that the images were based on reality, others identified the need for data sets to be trained with more diversity reflective of reality.
Generative AI can be intimidating for students and instructors, but with some guidance and practice, it can become a valuable (and even fun) tool. Ideally, it will provide a wider range of easy-to-understand information and give suggestions to learners who may not know where to begin. Because it is a new technology, there are still many who are reluctant to use it. In addition, perhaps the most challenging aspect for students is how to formulate useful prompts that elicit information English language learners can comprehend.
This presentation will review the results of a small study in which students in a Business English class used AI to prepare 3-minute presentations on various companies. One group (the control) was given an assignment with no instructions related to AI research. The other group was given the same assignment along with a 15-minute demonstration of using generative AI to gather comprehensible information. The results of both groups were then analyzed in terms of content and understandability, and a student survey regarding their overall impressions of AI was conducted. While the differences between the two groups were not substantial, with more guidance and improvements in AI services, the potential for improving student work exists.
After considering the strengths and weaknesses of the exercise, follow-up activities have been planned to enhance AI usage further. The overall aim of using AI is to get students to do more research in the target language (English) instead of collecting information in Japanese and then using translation software to produce content they and their classmates cannot understand. To this end, the presenter will offer examples of how Microsoft Bing Copilot, Google Gemini (Bard), and ChatGPT differ.
This presentation offers a comprehensive view of our experiences with AI integration in university language teaching, our strategic approach, and envisioned developments in competency-based language education, primarily focusing on STEM and Management students at the Language Center of the Technical University of Munich, Germany.
Our AI integration approach prioritizes preparing students for their professional journey by actively utilizing electronic text optimization tools in assessments. These challenges aim to encourage students to identify and address AI-generated weaknesses, fostering critical thinking and adaptability to evolving AI technologies.
Central to our strategy is the understanding and effective use of AI-based technologies, particularly ChatGPT. Since autumn 2022, we have been continuously collaborating with colleagues and AI-savvy teaching experts, actively refining our competencies through experimental implementation of ChatGPT in teaching. By integrating ChatGPT into Moodle, we aim to provide automated support, making language learning accessible and personalized for our students.
While embracing AI, we are committed to updating curricula to equip students with the competence to use these tools thoughtfully. Our focus lies in nurturing critical thinking and collaboration during language and knowledge engagement.
In conclusion, this abstract illuminates our experiences with AI integration, proactive strategies, and commitment to harnessing AI's potential. We aim to empower STEM and Management students to confidently navigate AI-driven language learning, envisioning a future where AI enhances education while preserving the human touch in teaching and learning. As we forge ahead, our vision for language education involves leveraging AI to enrich the learning experience and enable students to thrive in both academic and professional endeavors. By striking a balance between technology and human guidance, we aim to equip our students with the necessary skills to become proficient and reflective users of AI tools.
This presentation details affordances and limitations of using AI voice chatbots for oral proficiency development. In the past year, voice-input functionality was added to OpenAI’s widely known ChatGPT4 model, and there has been a steep increase in tools that enable users to improve aspects of oral proficiency through AI (e.g., ELSA AI, LangAI, TalkPal, and many others). In this presentation, I present an overview and comparison of available tools, and then I present user data from an adult English language learner, as well as insights from my personal use of these tools while practicing a second language. The presentation begins with a timeline that shows the rapid development of AI tools that have been developed for L2 speaking proficiency development. Then, I compare the functionality of AI tools as it relates to developing L2 oral proficiency. After discussing the tools, I share data from an adult English language learner and from my experience practicing my own L2s using the AI tools, focusing on three main aspects of the tools: user experience, corrective feedback, and adaptation to user’s needs and current proficiency level. Based on the data, I discuss affordances such as the convenience, affordability, and flexibility of the tools for oral proficiency development, and I discuss limitations including the kind of discourse voice chatbots tend to produce, the lack of some paralinguistic and turn-taking cues that normally occur in human-to-human speech, and the inaccurate feedback or misunderstandings that occur in spoken interaction with AI chatbots.
In recent years it has been a daunting task for teachers and educators alike to grasp the concept of AI in higher education. In our case, we felt challenged about whether or not we should implement AI into our language learning program with so many unknowns. We soon realized that there was a lack of informative classroom research into practices for adopting AI as an effective learning tool in the language classroom. We proceeded with caution and, after our initial trials, decided on a free version of Cathoven, an AI application with various teaching and learning tools, and set up a pilot study based on its CEFR Checker. We focused primarily on supporting first-year language learners with the CEFR Checker in two capacities. In the first communication class, we implemented and investigated how to use the text analyzer tool with students in activities in which students set up private accounts, analyzed their own written texts, and used feedback on vocabulary, verb forms, and sentence structure to self-evaluate their level of language and performance. Findings showed that students gained insight into their own written English production, and that students were motivated and used feedback in different ways to improve their language for overall development. Students were then given a survey regarding their use of the text analyzer tool to report on their experiences, opinions, and other ideas. In the second communication class, we applied the text analyzer tool to transcripts of video recordings made at the end of the semester in order to examine students' results and explore possible methods for conducting either formative or summative assessment. Findings showed future potential for using this AI-based CEFR Checker for student grading, as well as for designing authentic course materials and other resources for activities targeting specific level-based language objectives.
With the development of new technologies, the performance of English grammar checkers has improved significantly in recent years. However, the performance of Chinese grammar checkers is still not satisfactory. Although there are some new Chinese grammar checkers, their performance is still not widely accepted. Although OpenAI ChatGPT includes Chinese language data, few studies have investigated whether Chinese data available via ChatGPT can be used for Chinese grammar checking.
This study thus used the Chinese data from ChatGPT 3.5 and ChatGPT 4.0 to serve as Chinese grammar checkers. To test whether a ChatGPT-based Chinese checker can help detect and correct Chinese L2 errors, we used the data provided in the book "Speed Up Your Chinese," published by Routledge. This book is an innovative resource that identifies and explains the 150 common errors that English-speaking learners of Chinese repeatedly make. We then used ChatGPT 3.5 and ChatGPT 4.0 to check and correct this set of 150 common learner errors. After we collected the corrections and feedback/explanations from ChatGPT, we invited three highly experienced Chinese teachers to check these corrections and explanations. The survey results show that all three teachers agree that both ChatGPT 3.5 and ChatGPT 4.0 can help detect learners' errors and produce correct and fluent Chinese sentences. Additionally, ChatGPT can also offer useful and clear explanations. However, the teachers found that ChatGPT 4.0 consistently outperformed ChatGPT 3.5 in providing error corrections and explanations.
Even though the Chinese data in the current version of ChatGPT is much smaller than the English data, the outcomes are still very impressive. If the Chinese data continue to grow in the future, it can be expected that ChatGPT-based Chinese grammar checkers will become an indispensable tool for most Chinese teachers and learners.
This session is designed to equip educators with the skills to effectively use and understand prompting in AI platforms such as ChatGPT, Bard, and Bing. This interactive workshop will demystify the art and science of prompt engineering, focusing on how carefully constructed prompts can significantly enhance the quality of AI-generated content in an English Language Teaching setting. We'll dive into the mechanics of various AI platforms, demonstrating how nuanced prompting leads to more accurate, relevant, and engaging responses, especially when tailored to the needs of English language learners.
Participants will engage in hands-on activities (bring your devices!) to practice crafting prompts that yield desired outcomes, learning to navigate common challenges and optimize AI interactions. We'll explore strategies for using AI as a teaching aid, from generating customized content to reshaping grammar exercises and beyond. The workshop will also address ethical considerations, ensuring that educators are prepared to use these powerful tools responsibly.
By the end of this session, attendees will have a solid foundation in prompt engineering, empowering them to harness AI's potential to enrich their teaching and offer students a more personalized and effective learning experience. Whether you're brand new to AI or looking to refine your skills, this workshop will provide valuable insights and practical techniques to elevate your educational practice.
This presentation showcases a groundbreaking 28-week study conducted with twenty-five first-year and second-year students, exploring the efficacy of ChatGPT in enhancing English language skills, specifically in composition. The study rigorously tested the impact of ChatGPT on students' English proficiency, engaging them in a comprehensive, AI-integrated learning process.
Each week, students used ChatGPT to read and vocally practice short stories from Ririro's online collection, coupled with listening practices. The curriculum was structured in two phases: in the first half, students summarized these stories in about 100 words, and in the latter half, the summaries extended to approximately 200 words. These summaries were then refined using ChatGPT, providing an interactive and corrective feedback mechanism.
Key to the study was the submission of both the original drafts and the ChatGPT-revised manuscripts. Students conducted analyses on their errors, gaining insights into grammatical and structural aspects of English writing. Additionally, the revised texts were subjected to further scrutiny using tools like DeepL and Grammarly, refining the English quality. These final drafts were then used in text-to-speech applications for auditory reinforcement.
The study concluded with a comprehensive survey evaluating students' perceptions of ChatGPT's role and impact in their learning process. Preliminary findings indicate significant improvements in various areas: composition skills (64%), idea generation (72%), grammar and structure (68%), vocabulary enrichment (72%), overall satisfaction with the integration of ChatGPT in English teaching (76%), and the value of feedback received (80%).
This presentation will delve into how AI, specifically ChatGPT, can be effectively harnessed to elevate English composition and language proficiency, highlighting methodologies, student engagement, and the transformative potential of AI in language education.
Automatic Writing Evaluation (AWE) has experienced significant growth in the dynamic realm of English as a Foreign Language (EFL) writing, driven by remarkable advancements in Artificial Intelligence (AI) technology. This technological evolution has brought forth a plethora of writing tools characterized by increasingly sophisticated AWE systems that provide precise and adaptable solutions for the challenges faced by EFL students. This study undertakes a comparative analysis of two AWE tools, Grammarly and AI-Writing Assistant, with a focus on evaluating their feedback and suggestions in the context of EFL student writing.
The aim of this comparative study is to shed light on the strengths and limitations inherent in these two AWE tools, providing valuable insights to the field of EFL writing instruction. A total of 30 compositions by EFL students were collected and evaluated using both AWE systems. The compositions are all general writing and have a length of approximately 100-120 words.
The comparative results reveal the distinct strengths exhibited by each AWE system. Grammarly, for instance, demonstrated exceptional performance in correcting article errors, punctuation issues, and specific tense-related errors, which are common challenges for EFL students. On the other hand, AI-Writing Assistant excelled in its ability to provide suggestions regarding broader writing concerns, such as improving writing clarity, revising wordy sentences, suggesting better vocabulary choices, and even offering valuable collocation recommendations.
These insightful findings have significant pedagogical implications and serve as a basis for our recommendations on the effective utilization of both AWE tools in the EFL classroom. It is hoped that this research thus bridges the gap between AWE technology and pedagogical practice, facilitating a more effective integration of these tools into EFL writing classrooms.
This presentation aims to summarize and highlight research findings from an online homework course using the Moodle learning management system. Students were tasked with completing 50 homework activities from a pool of 64 over a semester. The research data compare the performance and completion rates of students assigned 8 activities biweekly against those with unrestricted access to all 64 quizzes throughout the semester. Initial findings suggest that the biweekly group engaged with activities more consistently and frequently over the semester than the unrestricted group. The implications of these findings for online learning and student engagement will be summarized and discussed.
The presentation also addresses the considerable time and effort required to create and edit settings for such a large number of activities. To remedy this, the researchers secured a Kaken fund to develop the Moodle plugin DripRelease. This freely available plugin enables users to streamline the assignment of a desired number of activities to students over a desired timeframe (e.g., 8 activities every two weeks over the course of a semester). Through a single interface, users can efficiently bulk-assign activity frequency, availability, and group access, and clear all settings on a large number of activities in only a few clicks, offering significant time-saving benefits for educators. The presentation will include a live demonstration of the main features of the plugin, an update on the current status of the project, and an opportunity for audience members to have some hands-on experience. It is hoped that the plugin can be applied beyond its initial research context and aid student engagement in wider areas. Audience feedback and suggestions will be most welcome.
The integration of Artificial Intelligence (AI) in Computer-Assisted Language Learning (CALL) has transformed language education by personalizing learning experiences, providing instant feedback, and offering immersive practice opportunities. This presentation explores AI's pivotal role in reshaping traditional pedagogies and its implications for the future of language education. AI in CALL facilitates authentic language acquisition through intelligent algorithms and natural language processing, making quality education more accessible and inclusive. The presentation showcases how AI technologies, such as machine learning and speech recognition, can revolutionize language learning practices, enhancing proficiency and fostering learner autonomy. Additionally, the presentation unveils findings from a study of 107 Japanese and international university students' attitudes towards AI integration in English language learning. Students expressed enthusiasm for AI-driven tools to improve their language skills, indicating receptiveness to the future integration of AI into curricula. Additionally, the presentation demonstrates an original AI website developed by the researchers to enhance spoken language skills, highlighting its potential for learners of all levels and locations. Positive feedback from participants in the study underscores the viability of AI applications in language education. Participants in this presentation will have the opportunity to use the application and give their feedback on its usefulness in English language education. Finally, the presentation discusses the future implications of AI in language education, emphasizing its role in redefining classroom dynamics and pedagogical practices. By harnessing AI, educators can create adaptive learning environments that foster linguistic competence and cross-cultural understanding, promoting inclusivity and efficiency in nurturing global competencies. This presentation advocates for a paradigm shift towards AI-driven pedagogies, empowering learners and educators in their pursuit of linguistic proficiency and cross-cultural communication.
The demand for independence and agency in learning continues to develop as schools change the concept of the classroom. However, students' learning without the help of a teacher or instructor has shown that they need guidance to achieve self-directed learning. To respond to the problem above, the researcher used artificial intelligence (AI) to help accelerate students' learning. This study explored the role of Artificial Intelligence (AI) applications in English speaking practice for senior high school students in Indonesia. The method used in this study is practice and independent evaluation. The supporting tool is ChatGPT for comprehension, vocabulary, and grammar. The other supporting tool is Google Text-to-Speech (GTT), which provides students with a model to improve their pronunciation and fluency. A mixed-method case study approach that utilized quantitative and qualitative data was deployed to effectively address the proposed research questions. Quantitative data was collected through students' self-assessment questionnaires using five-point Likert scale items to assess the degree to which they agreed or disagreed with a set of statements created by the researcher. It was completed by participants using an online anonymous form. As for the qualitative data, instructors observed and conducted interviews to gain better insight. The results showed that integrating technology into language learning has enabled students to develop their speaking skills. Students developed their comprehension, expanded their vocabulary, and improved their grammatical structure using ChatGPT. Students' active involvement in language learning impacted their level of agency. Another finding was the improvement of students' pronunciation and fluency with the help of GTT. Students needed to practice several times to attain appropriate pronunciation and fluency.
In many tertiary educational settings, participation can comprise a significant part of a student’s overall grade for some courses. Transparently documenting this important element can be challenging (Ihsan, 2018; Rasooli et al., 2018; Chang, 2013). For example, many schools do not accept the overt use of easily quantified numbers derived from attendance as a factor when calculating participation grades.
This practically-oriented presentation will discuss my attempts to devise systematic, fair and easily explicable protocols for calculating student participation grades. It is predicated on the use of several tools and techniques that will be discussed in the presentation, notably (a) using spreadsheets and having an easily updated seating plan, (b) regularly calculating student grades, and (c) recording incomplete or late assignments. Together, these components make up the ABC Grade that I use instead of “Participation”. During the presentation, I will demonstrate how to explain ABC Grades in a syllabus, and how to calculate them, as well as how to explain them to students.
Participants in this session should leave with a better idea of how to calculate participation grades in their own settings using spreadsheets and the protocols explained. Ongoing questions, lively participation, and shared experiences from participants are most welcome, albeit within the limited time available.
JALTCALL 2024 Keynote Speaker from Massachusetts Institute of Technology
This talk explores the potential of using generative AI for language education, emphasizing the evolving roles and responsibilities of language teachers. It encompasses several key areas of research. The initial focus is on developing strategies for prompt engineering that facilitate the creation of language teaching and learning materials. I will showcase various prompts for content generation that align with individual learners’ needs, interests, and proficiency levels. Second, the pedagogical efficacy of AI-driven language pedagogy will be investigated. This involves understanding the teacher’s evolving role and how language teachers can best utilize AI as a co-teaching tool. I will discuss the importance of harnessing teachers’ prompt writing skills and AI literacy. I will also share practical use case scenarios of generative AI based on my own teaching experiences. Last, I will address some critical issues that arise from the use of generative AI for language instruction. I will raise questions such as: What is the role of language teachers in the age of AI? How should the language curriculum be adapted to incorporate generative AI? What are the ethical and pedagogical implications of using generative AI for language learning? I will provide anecdotal answers to such questions. I conclude my talk by arguing that we need to cultivate “perspective shifts” that enable generative AI to optimize our future language education. The imperative now is not to resist but to adapt.
The dinner reception will be held on the Ground Floor (1F) in "M Patio"
Fluency development is an important aspect of second language vocabulary acquisition. Spending time developing fluency helps to increase the speed at which words can be accessed. In the context of reading, it is important to develop this fast access to vocabulary to avoid the vicious cycle of reading discussed in Nuttall (1982) and Coady (1997). One way to develop reading fluency is through speed reading. In this type of reading, learners read simplified material, ideally without the presence of new language features. Previous research has found that learners can increase their reading speed in as few as 20 speed-reading sessions, with each session lasting no more than 10 minutes (Chang, 2010; Chung & Nation, 2006; Tran, 2012). Traditionally, speed reading has been implemented using printed copies of readings. However, it is now possible to supplement, or even replace, paper readings with stories that can be read on electronic devices. Nonetheless, little research has been conducted which compares these two different modes of reading in terms of their facilitative effects on L2 reading fluency development. This presentation presents the results from a study which examined the extent to which the medium of speed reading (paper versus mobile device) facilitates L2 reading speed development. Sixty-eight university students enrolled in a private university in Japan volunteered for the study. Half of the students read short stories using ESL Speed Readings, a free mobile app, while the other half practiced speed reading using paper copies. After six weeks, the two groups switched mediums and read for six more weeks. The presenter will discuss the results and implications of the study.
AI tools such as ChatGPT can be used to great advantage by teachers, classes, and individual students to improve the quality of Academic English writing skills while providing superior and more customized feedback within the parameters of specific writing assignments. The authors intend to showcase the various ways ChatGPT could support best teaching practices and successful methodologies for writing tasks based on the analysis of a corpus of student writing samples collected before the arrival of ChatGPT. Three academic years of student writing samples were analyzed. Undergraduate EFL students of different CEFR levels were instructed to write 100+ words within 10 minutes according to a specific writing theme or prompt. These writing samples were then collected, digitized, and anonymized before they were separately analyzed by each author using ChatGPT. The results of each analysis were compared and interpreted in order to understand trends in student writing which could direct future coursework. Such an objective analysis of student work using ChatGPT can be extremely beneficial in (1) identifying areas that need improvement, (2) evaluating individual student progress, and (3) directly informing ChatGPT-based methodology in the teaching process.
XReading has become a popular platform for many ESL educators to conduct extensive reading in their classes. The convenience and accessibility of XReading continue to provide an enjoyable reading (and listening) experience for students all over the world. From personal experience, however, there are issues that arise with the use of XReading: the cost, which can be prohibitive for some students and institutions; reading material that does not suit readers’ interests (Collett, 2018; Sartika, 2020); and the 'gamification' of the experience with quizzes and metrics, which can obfuscate the experience of reading. Howarth and Bollen (2019) found in their research that of the 730 second-year university students introduced to XReading, only half were interested in continuing it after a semester.
What if, instead of micromanaging the minutiae of reading rates, accuracy, and comprehension, students were given an opportunity to read things they were genuinely interested in? This leans into the original goal of extensive reading: reading for pleasure. For this project, students were asked what types of material they were interested in reading. The researcher then used ChatGPT and DALL-E to create graded readers for the students.
This research set out to answer the following questions:
1. Will students notice a difference if graded readers are written by AI?
2. Can students enjoy reading AI-generated graded readers?
3. Can AI-generated graded readers improve students' motivation to read in English?
The capabilities and practical applications of artificial intelligence (AI) technologies have advanced rapidly over recent years (Heift & Schulze, 2015). Foreign language education has already been dramatically impacted by these developments, as learners gain access to increasingly powerful AI tools capable not only of improving grammar and vocabulary choices in academic writing, but also of generating highly coherent and cohesive original written texts (Godwin-Jones, 2022). Such tools may hold great potential for learning (Cai et al., 2023), but they also pose an array of complex new problems on matters of pedagogical effectiveness and academic integrity (Chomsky, 2023).
Understanding how key stakeholders use and perceive these technologies is crucial for developing effective and equitable classroom practices and also for educational policy-making, yet systematic research on these topics remains scarce. This presentation will review the key findings on language learner and teacher perceptions of and attitudes towards AI technologies, including AI-augmented machine translation tools and generative AI technologies such as ChatGPT, in the existing CALL literature. Furthermore, a work-in-progress research project on the actual use of such technologies among young adult foreign language learners in Japan for the purposes of English academic writing will also be introduced. Initial findings from a meta-analysis of the relevant existing CALL literature as well as from learner and teacher surveys will be presented. The aim of this session is to encourage constructive discussion among CALL researchers and practitioners about the potentials and the pitfalls of current and emerging AI technologies for language learning and teaching as well as to identify fruitful avenues for future collaborative research.
Recently, many people have been learning a new language through the Internet. One practical English language learning tool is a chatbot with Artificial Intelligence (AI). Replika (https://replika.ai/) can be a self-study tool for language learners. It is an AI chatbot that communicates with language learners naturally, and it helps learners learn how to communicate in English. Replika understands the gist of what learners write, and it recasts and answers questions. Although Replika cannot speak and requires age verification, learners are able to learn how to communicate with others in English and also notice grammatical errors. Two third-year senior high school students in Japan and one university student in Hawaii used Replika for two weeks. The research was conducted in 2021, and the researcher asked them to chat with Replika for at least one hour per week for two weeks to see how Replika helped the students’ communicative abilities. The researcher asked them to send screenshots of their conversations with Replika in order to observe their improvement. The qualitative data show that all students learned new expressions from Replika. One student also mentioned that chatting with Replika was a good source of motivation because it includes a gamification element: when students write long sentences, they earn extra coins and can buy special items for Replika. In conclusion, Replika can be a convenient tool for learning English, although the conversation is sometimes unnatural. Students get replies quickly and feel that they are chatting with a native speaker because Replika remembers their conversations. The author will propose how the chatbot works for students’ language learning through qualitative data.
Adopting post-reading discussions in English offers several advantages, including enhancing comprehension, stimulating critical thinking, developing interpersonal skills, fostering curiosity, and providing diverse perspectives (Alvermann, 1985; AdLit.org, 2008; UNESCO, n.d.). This practice was implemented as a weekly assignment at a private university within the context of Academic English Reading classes. Non-English major first-year students (Class A) from the College of Social Sciences and second-year English Literature major students (Class B) from the College of Letters participated in these discussions using Padlet, a digital platform facilitating rich media communication (British Council, n.d.). Over the 2023-24 academic year, students engaged in ten discussions during the spring semester and nine or ten discussions for Class B and Class A, respectively, in the fall semester, covering various topics from the textbooks. This approach garnered active participation from most students, who expressed satisfaction with the discussions, citing the acquisition of new information and diverse perspectives from fellow participants. The presentation provides a comprehensive analysis of the benefits and drawbacks of employing Padlet for such discussions, ultimately recommending it as a valuable tool for promoting active learning.
Affect (Ahmed, 2004) has gained more attention in applied linguistics as a predictor of transformative learning action. Specific attention has gone to the multimodal nature of how affect is experienced and its ability to enable transformative action (Kiesling, 2019; Westberg, 2022). This study reports how multimodal video feedback on academic assignments brings about ‘affect’ in students’ interaction with feedback and how this affects their future learning practices. Digital multimodal video feedback is the practice of providing textual track changes while simultaneously recording screen activity and voice-over commentary. Thus far, research on academic feedback has primarily focused on the effectiveness of types of feedback and discursive strategies of delivery. Less attention is paid to how feedback can be designed to become part of students’ future learning practices. This study combined a multimodal discourse analysis (Norris, 2019) of a corpus of feedback videos with follow-up focus group interviews. The interviews aimed to understand how multimodal video feedback was experienced, both emotionally and affectively, and identify the self-reported literacy practices that emanated from the interaction with multimodal feedback in comparison with traditional text-based feedback. This study found that the multimodal nature of video feedback provided participants with multiple semiotic cues to understand feedback, specifically, the metaphysical sense of the mood and spirit in which feedback was provided. Second, this multimodal engagement subsequently brought about a range of affective responses signaling a more active stance toward future learning practices in comparison with traditional text-based feedback. Finally, sharing video feedback online created important affordances to facilitate transformative learning action. Participants could access video feedback free of time and space constraints and reported that this allowed them to access feedback multiple times and integrate it as a learning resource for future learning practices and even professional practices.
JALTCALL 2024 Keynote Speaker from Massachusetts Institute of Technology
Large Language Models (LLMs) such as ChatGPT and its siblings have not only marked a sea change in AI research and development, but also had a major ripple effect on fields that may have used AI minimally. In this presentation, I will first provide a brief history of Natural Language Processing, the field that has developed LLMs, and then delve into the foundational aspects of LLMs, providing an overview of their basic mechanisms, functionalities, and limitations. Next, I will survey how this technology is impacting, and in some cases revolutionizing, language learning applications. Finally, I will conclude with thoughts on what we may expect as these technologies evolve. [virtual]
Instead of using proprietary AI offerings, it is possible to run your own private AI on your own laptop or even on a Raspberry Pi. In this hands-on workshop, we will present Ollama, a simple, free, and open-source platform for launching and running AI models. Participants will be guided through installing Ollama on their own devices, which is as simple as running a single command or launching a Docker container. Next, participants will download and run one of a number of available open-source models that have been trained for a variety of different purposes. A web user interface can also be added, connecting to the Ollama API for your own private "ChatGPT" without needing so much as an external internet connection. Ollama is constantly evolving, so we will do our best to map out the different offerings to help you find a use in your daily teaching practice or research project. Additionally, we will briefly discuss alternatives to Ollama by showing how to use Hugging Face. Participants will leave this workshop with their own completely private artificial intelligence.
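For participants who want a concrete starting point before the workshop, the sketch below shows one way to query a locally running Ollama server over its HTTP API from Python. It assumes Ollama has already been installed, a model has been pulled (for example, llama3), and the server is listening on its default port, 11434; the model name and prompt are placeholders rather than workshop materials.

```python
# Minimal sketch: querying a local Ollama server from Python.
# Assumes Ollama is installed, a model such as "llama3" has been pulled,
# and the server is running on the default port 11434.
import requests

def ask_local_model(prompt, model="llama3"):
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    # The non-streaming response body includes the generated text
    # under the "response" key.
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Write three B1-level comprehension questions "
                          "about a short story set in Kyoto."))
```

Because the request never leaves the local machine, the same pattern works offline once a model has been downloaded.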
This presentation will examine the current ability of a Large Language Model, ChatGPT 4, to analyze Discourse Completion Task (DCT) responses for politeness strategies. The responses to an eight-situation DCT designed to elicit refusals were used to create two small corpora: one of university EFL learners and one of university EFL instructors. The instructors' responses were manually coded for use of speech act strategies and used as a baseline to test the LLM. Strategy definitions, examples, and prompts were carefully tested and retooled, leading to an eventual accuracy rating of 99.91 percent when combined with minimal human monitoring. The LLM’s ability to accurately recognize and tabulate the use of speech act strategies can aid researchers and teachers in the analysis of large data sets and identify gaps in strategy use by learners as targets for instruction. The utility of assessing strategy use for pragmatic competence, the process used to prompt the LLM, and the results of the research will be presented.
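The general shape of such an LLM-assisted coding pipeline can be sketched as follows. This is not the presenter's actual prompt or code: the strategy labels, prompt wording, and model name are illustrative placeholders, and the sketch assumes access to the OpenAI Python SDK with an API key set in the environment.

```python
# Hedged sketch of LLM-assisted coding of DCT refusal responses.
# Not the study's actual prompts or pipeline; labels and wording are
# placeholders for illustration only.
from openai import OpenAI  # requires OPENAI_API_KEY in the environment

client = OpenAI()

CODING_INSTRUCTIONS = """You are annotating refusals from a discourse
completion task. Label each response with every strategy it uses from
this set: [direct refusal, excuse/reason, apology, alternative offer,
gratitude]. Return only a comma-separated list of labels."""

def code_response(dct_response, model="gpt-4"):
    completion = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": CODING_INSTRUCTIONS},
            {"role": "user", "content": dct_response},
        ],
        temperature=0,  # keep the coding as deterministic as possible
    )
    return completion.choices[0].message.content

print(code_response("I'm so sorry, I'd love to, but I have an exam tomorrow."))
```

In practice, outputs coded this way would still be spot-checked against human annotations, as the abstract's mention of "minimal human monitoring" suggests.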
The Open Vocabulary Level Test (“OpenVLT”) is a computer-adaptive vocabulary level test that generates an English level score based on the CEFR, an internationally recognized standard for English assessment. The test is made available at no cost on EnglishCentral and other sites, including Moodle. The OpenVLT allows teachers, e-learning solution providers, and researchers to share an open-source, royalty-free, widely available vocabulary level test that can accurately determine students’ English level. The OpenVLT, along with the associated test items, source code, and a detailed specification, is freely available in a repository maintained by the Center for Open Science (OSF), whose mission is to increase the openness, integrity, and reproducibility of research.
While financial reports play an important role in guiding investment decisions, the research landscape pertaining to fake news detection in the financial sector remains underexplored, with most existing literature primarily focusing on the identification of false political and social news or on the impact of fake financial news on investor attention and market reaction (Zhang & Liu, 2023). To address this gap and to mitigate the limitations of pure machine learning algorithms, the present study proposed applying the n-gram approach from natural language processing (NLP). Through investigating word-based 4-grams (i.e., quadgrams) and scrutinizing the forms and functions of these lexical strings in the fake financial news articles obtained from the Securities and Exchange Commission (SEC) crackdown on stock promotion schemes in the United States in 2017, the present study identified 18 high-frequency quadgrams. Based on the taxonomy devised by Biber and his colleagues (1999), the findings showed that (1) the 18 quadgrams were dominated by PP-based expressions in terms of structure and (2) functionally, referential quadgrams constituted over 75% of the total tokens and types, followed by an additional subcategory (i.e., subject-specific quadgrams) containing medical or financial vocabulary. It is hoped that, by uncovering n-grams alongside qualitative structural and functional analyses, the findings of this study will not only contribute to the detection of illegitimate news but also draw the attention of consumers and readers to some implicit deceptive linguistic patterns in financial news reports.
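As a rough illustration of the quadgram extraction described above, the following Python sketch counts word-based 4-grams across a small collection of texts. The sample sentences, tokenization, and frequency cutoff are placeholders for illustration, not the study's actual corpus or thresholds.

```python
# Sketch of word-based 4-gram (quadgram) extraction and counting.
# Sample texts and the cutoff are illustrative only.
import re
from collections import Counter

def quadgrams(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    # Slide a 4-token window over the text.
    return zip(tokens, tokens[1:], tokens[2:], tokens[3:])

corpus = [
    "The company is expected to announce results in the near future.",
    "Results are expected to be announced in the near future as well.",
]

counts = Counter()
for doc in corpus:
    counts.update(quadgrams(doc))

# Print the most frequent quadgrams (e.g., "in the near future").
for gram, freq in counts.most_common(5):
    print(" ".join(gram), freq)
```

High-frequency quadgrams surfaced this way would then be classified structurally and functionally by hand, as in the Biber-based analysis the abstract describes.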
This research-based presentation investigates the integration of Artificial Intelligence (AI) tools, like ChatGPT and Grammarly, within language learning to ascertain their impact on enhancing learning while maintaining academic integrity. It focuses on AI's role in facilitating personalized learning experiences, such as generating writing topics, correcting grammatical errors, expanding vocabulary, and offering sentence structure suggestions. These AI-integrated tasks are designed to aid students in developing clear topic sentences, applying grammar lessons, and improving their linguistic proficiency in writing assignments. Additionally, the study incorporates authentic writing activities, including creative writing prompts, reflective writing journals, citation and referencing workshops, peer-to-peer feedback, and manual grammar exercises, to improve students' ability to write authentically. Utilizing surveys, classroom observations, and pre and post-test assessments, the research aims to explore whether these AI tools support or hinder students' development within the Zone of Proximal Development (ZPD), assessing their potential to foster dependency or undermine the authenticity of students' writing. The objective is to identify strategies for educators to integrate AI in language teaching responsibly, enhancing the educational experience while promoting genuine student engagement and upholding academic standards. This study aims to bridge AI's technological advancements with pedagogical objectives, offering insights to navigate the evolving CALL landscape.
What happens when an experienced educational technology teacher who has run Professional Development workshops becomes a student again and goes through the identical learning experiences that he puts his own students through? That's what happened to me when I enrolled in a two-year Master of Creative Writing degree at Macquarie University in Australia. Suddenly, I was the student on the other end trying to navigate the online Learner Management System utilised by MQU, and while I was familiar with many things, such as the user interface, there were many instances that left me frustrated and questioning my own professional practices. Some examples were simple things, such as where global course Announcements were made, while others were more complex, such as where multimedia materials were housed and how they were accessed by students. While taking this course online, I encountered many frustrations that are probably very similar to some of those that our own students might face.
In this presentation, I want to reflect on how we, as educators using educational technology, can improve the way we set up and implement our courses. By doing so, we can begin to improve and enhance the user experience of our students and help them overcome preventable technological challenges they might face, particularly when using software, such as Learner Management Systems, that is likely to be in a second language.
Educational technology is a ubiquitous part of the learner experience now, so it's vital that we understand how to make the transition from classroom to computer as smooth as possible, particularly in a Japanese context, where many students will encounter technologies such as LMSs for the first time when they begin university.
This research aimed to improve the English writing skills of students using a Personalized Learning strategy integrated with Teaching at the Right Level (PL-TaRL). Personalized Learning (PL) tailors writing instruction to students' unique needs, emphasizing engagement. Teaching at the Right Level (TaRL) evaluates students' current level and gradually increases task complexity to promote improvement in writing ability.
The researchers implemented PL-TaRL through an English writing LMS built on Google Sites to create personalized learning plans, accommodating diverse needs and levels in a unified platform. The LMS provided choices of materials and exercises adapted from an English textbook supplied by the Indonesian Ministry of Education and combined with materials created by the researchers using generative AI such as ChatGPT.
To determine the students' levels, the researchers conducted diagnostic tests, and the results were used to guide students in choosing materials and exercises. The researchers also kept adjusting the content and difficulty of the LMS based on individual performance and progress. They provided students with personalized feedback, ensuring effective and dynamic learning and substantial improvements in writing. The students were from a vocational high school in Indonesia. Data obtained from pre- and post-tests and formative portfolio assessments indicated improvements in students' written English. Before the strategy's implementation, most students had writing skill levels equivalent to the CEFR A2 level, which then increased to the B2 level after the strategy's implementation. This implies that the PL-TaRL learning strategy has positive impacts on students' writing abilities, since it provides students with level-based materials and personalized feedback.
Teachers are required to continuously develop their professional duties. However, in practice, many problems affect traditional approaches to teachers' professional development (TPD), such as high levels of stress due to overflowing administrative tasks, a lack of competence in integrating technology into learning, limited digital literacy skills, and time constraints on continuous professional development (CPD). Nowadays, the use of Artificial Intelligence (AI) is considered a way to solve these problems. This research aims to investigate how AI is used in TPD. The research employed a mobile-based training certification framework by Dahri. This qualitative case study involved six in-service teachers who had graduated from the Teacher Professional Education (Pendidikan Profesi Guru/PPG) program at a reputable university in Indonesia. The subjects were chosen based on the results of a preliminary study of in-service teachers who use AI for their TPD practice. Interviews were conducted to gather data for the study, and the interview data were analyzed thematically. The findings indicate that AI provides an inclusive basis for transforming TPD for several reasons. First, AI supports teachers' personality competence by lightening their administrative tasks so that they can teach with more enthusiasm, freshness, and friendliness and provide comfortable learning for students. Second, AI also assists teachers' pedagogical competence by offering easy integration of technology into teaching and learning activities. Third, AI upgrades teachers' professional competence by providing unlimited resources that are easy for teachers to access. Lastly, AI strengthens teachers' social competence, since it provides opportunities for teachers in various regions to connect with each other. This study implies that AI has an important place in today's TPD programs, since its adaptability and flexibility can help teachers keep up to date with technological and educational advancements. A limitation of this study is that it does not specify a particular type of AI; it is therefore recommended that further studies investigate a specific AI type in TPD practice.
This study investigates the integration of AI chatbots, specifically ChatGPT and Bard/Gemini, in advanced-level "Practical English" courses for college freshmen. It examines their impact on student engagement and motivation, aiming to assess the effectiveness of AI-driven chatbots in Task-Based Learning (TBL) activities. Tasks include conversation generation, debates, and problem-solving, tailored to individual student interests and learning styles. Prior to each activity, students are instructed on how to use chatbots effectively. The web presentation tool "Menti" is utilized for instruction and idea solicitation. Objectives include understanding the advantages and challenges of AI in language education and exploring innovative language learning methods. The study evaluates the influence of ChatGPT or Bard/Gemini on student participation and motivation, as well as the blended approach combining AI interaction with human-guided instruction.
Keywords: AI chatbots, Task-Based Learning, student engagement, language education.
Whether we like to admit it or not, AI is here to stay. Although there are several advantages to embracing AI in academic life (Lin 2023), AI use is frowned upon by teachers in the author's own university teaching contexts. Here, two university writing classes were monitored for AI use with Originality.ai, an AI-detection software application, over the 2023-24 school year. Students were informed at the beginning of the year of the university's “no AI usage” policy. The results of the monitoring with AI-detection software are presented here, along with feedback from 44 anonymous student surveys handed out at the end of the year. Fifty percent of students reported using AI, though most reported using it on only 5-25% of any individual assignment. The students were also asked why they used AI or did not use it. Answers ranged from using it to “check my writing” to not using it because the university disallowed it. The author discusses best practices for using AI in the university EFL writing classroom. One reason to use AI is that it can improve student writing, although transparency of use is required.
It is well acknowledged that the COVID-19 global pandemic has resulted in unignorable changes to almost every aspect of people's lives, including foreign language education (UNESCO, 2020). Unlike traditional online education, which is usually planned and prepared in advance, the abrupt transition to online education during COVID-19 placed great pressure on foreign language teachers (MacIntyre et al., 2020). Similarly, students may have experienced heightened foreign language anxiety (FLA) in online classes during COVID-19 (Moser et al., 2021; Russell, 2020), due to isolation from teachers and peers (Hurd, 2007). Despite continuing interest in FLA, research directly comparing FLA between online and face-to-face classes is still lacking (although see Resnik et al., 2023). This study aims to fill this gap by adopting a quasi-experimental, within-subject design to compare the effects of synchronous-online and face-to-face learning contexts. Specifically, this study explores FLA levels, coping strategies, and their relationships as the same group of students moved from synchronous-online to face-to-face English language classes during COVID-19.
The FLA levels of 47 undergraduate students at a Japanese university were tracked over 14 weeks, with three waves of data collected in each of the two seven-week phases (synchronous-online and face-to-face), using the well-established Foreign Language Classroom Anxiety Scale (Horwitz et al., 1986). Students' strategies for coping with FLA were also recorded with the recent FLA Self-regulatory Strategy Scale (Guo et al., 2018). Results from statistical analyses showed that students experienced lower FLA levels in the face-to-face phase than in the synchronous-online phase, and that students used Avoidance strategies less often than the other five categories of strategies, namely Cognitive, Management, Affective, Social, and Appraisal. It was also found that students with higher FLA levels tended to use Avoidance and Affective strategies more often. Theoretical and pedagogical implications of the current findings will be discussed.
Students need to understand different cultures across the world to gain intercultural communication skills and raise awareness. Therefore, this speaker asked a professor in Sri Lanka to carry out a telecollaboration project. Based on the agreement with this professor, the project invited 16 first-year Japanese college students and 10 third-year Sri Lankan college students to participate for two months to learn about cultural differences, including taboos and manners. They were divided into eight groups and started this cultural-exchange project by making presentation slides together. Due to the time difference and different communication styles, both groups of students complained of difficulties, so the two lecturers shared these students' voices and tried to observe and support their meetings using Zoom. During this process, each group of Japanese students reported their progress and issues in the classroom and then discussed how to solve these issues. As a result, they noticed that WhatsApp, a social media app they were unfamiliar with, offered a way to work around the time difference and pronunciation problems. In the end, the Japanese students delivered presentations and sent the recorded videos to the Sri Lankan students. After this project, the Japanese students answered survey questions and noted their comments. Moreover, this speaker had face-to-face meetings with three Sri Lankan students in Sri Lanka. The interview data were analyzed to investigate the students' intercultural communication skills and compare their perspectives toward different cultures. The data, including the videos and both groups' perspectives, will be reported in this presentation.
Despite the growing body of research on the pedagogical aspects of machine translation and ChatGPT in language learning, concerns have been raised about students relying on machine translation to avoid language learning and about ChatGPT increasing the risk of plagiarism and cheating. These concerns may be attributed in part to students' lack of literacy about machine translation and ChatGPT, including their strengths and limitations, their mechanisms, and effective applications for language learning. Therefore, this study conducted a literacy program on machine translation and ChatGPT for Japanese university students majoring in English, with the aim of exploring the impact of acquiring machine translation and ChatGPT literacy on students' motivation to learn English. The literacy program was conducted over a total of 14 sessions (once a week, 90 minutes per session), with surveys conducted at the first and last sessions. The surveys encompassed intrinsic/extrinsic motivation for learning English, willingness to continue learning English even after the development of machine translation and artificial intelligence (AI), what they were able to do with machine translation and ChatGPT, and their level of satisfaction. The results showed that, although not statistically significant, this literacy program tended to enhance students' willingness to continue learning English even after the development of machine translation and AI, as well as their intrinsic motivation and identified regulation for learning English. At the same time, there was a reduction in their amotivation, although this also did not reach statistical significance. This study will further explore the factors within the program that contributed to these observed changes.
Voice recognition technology has seen remarkable advancements in recent years, and its integration into the field of education holds immense potential for transforming the way we teach and learn, particularly in Language Learning. In Reading classes, voice recognition technology is already able to give students personalized instruction and instant feedback. It encourages language learners to improve their pronunciation, accuracy, and reading speed. At the same time, voice recognition tools are aiding educators in assessing students' oral proficiency, saving time.
Looking ahead, voice recognition will likely be used more interactively in Reading education applications. Future systems may, for example, engage with students in natural conversations about a text, improving reading comprehension and advanced oral skills. Applications will hopefully become more affordable and accessible too.
This presentation will explore the current state of voice recognition in Reading education, as well as its benefits and potential.
Intercultural competence is an essential component of effective communication, particularly in foreign language education like English, where learners engage with diverse linguistic and cultural backgrounds. Research highlights the importance of integrating intercultural competence with linguistic proficiency, empowering learners to confidently share their perspectives and enrich the meaning-making process in communication. Our presentation delves into the findings of our research on the impact of ChatGPT on Vietnamese English learners, with a specific emphasis on intercultural competence development. Through interviews with two English-as-a-foreign-language (EFL) teachers and six EFL university students, we explore their perspectives on integrating ChatGPT as a language input source for overall language production. Grounded in Deardorff's (2004) pyramid model of intercultural competence and Coleman's (1968) push and pull framework, our investigation aims to unveil areas of agreement and divergence between teachers and students concerning the optimal use of ChatGPT for enhancing intercultural attitudes, knowledge, and skills. Our discussion emphasizes adapting to the evolving global landscape influencing diverse language use and addresses ethical conflicts related to intellectual property and students' academic integrity. The presentation concludes with reflections, implications, and recommendations for future research, emphasizing a nuanced understanding of technological advancements in foreign language education within non-English speaking countries.
This study focuses on educational artificial intelligence (AI), centering on the impact of AI in transforming educational paradigms. Recent research efforts, including those at Harvard's Next Level Lab (2023) and insights from thought leaders like Donald Clark (2023) and Paul Kim at Stanford’s SMILE lab, highlight the shift from traditional teaching methods to dynamic, just-in-time Socratic learning processes facilitated by AI. Large Language Models (LLMs) like ChatGPT are at the forefront of this shift. As Martin et al. (2020) observe, adopting emerging technologies in education typically starts with isolated case studies before evolving into a comprehensive research body that addresses existing knowledge gaps. Our study aims to add to this nascent body of research. Focusing on integrating AI tools in a first-year English as a Foreign Language (EFL) classroom, we explore the concept of Intelligence Augmentation (IA). This concept promotes a symbiotic relationship between AI and human intelligence, as Hassani et al. (2020) outlined.
Employing a qualitative approach, we examined students' engagement with AI over one semester and their perspectives on its use. A specially designed questionnaire assessed learner motivation, attitudes, and self-efficacy. We adapted interview questions from Hwang et al. (2009) to better understand students' views on the learning method. This data is instrumental in assessing the approach's effectiveness and pinpointing areas for improvement, particularly AI's potential to enhance critical thinking, active learning, and engagement by guiding question creation and tailoring it to individual styles. In summary, the study hopes to advance our understanding of the role of AI in education, particularly in EFL classrooms, by assessing its impact on students' perception of the IA learning approach. It also aims to provide practical insights for educators and policymakers looking to leverage AI in educational settings.
This pilot study investigated two main components: the use of ChatGPT for generating speech scripts, and the integration of DeepL/Google Translate and Google Docs for pronunciation checking and practice. Specifically, ChatGPT served as an online tutor and the generator of speech scripts for the participants. In addition, the audio function of DeepL/Google Translate and the Automatic Speech Recognition (ASR) function of Google Docs were employed for pronunciation checks and reading practice. These elements were selected to determine how task designs utilizing Generative AI and online tools could support effective language learning. Five graduate students from a U.S. university participated to evaluate the educational integration and impact of Generative AI and online tools. Additionally, an online survey explored the impact of task design on learners’ attitudes toward technology integration in language learning. Participants’ feedback and reflections offered insights into the tools’ effectiveness, usability, and acceptance in their language learning process. Results suggest that strategic task design incorporating Generative AI and online tools has the potential not only to foster autonomous learning but also to positively influence learners' perceptions toward the use of technology. However, the study highlighted the importance of addressing ethical considerations in deploying these technologies, suggesting that responsible and informed use of Generative AI and online tools is crucial in educational settings. The findings have practical implications for designing and implementing online tools and contribute to the discourse on Generative AI integration in language learning and teaching. This presentation will also introduce curriculum ideas utilizing this combination of Generative AI and online tools for future applications.
Second and foreign language educators strive for better methods, conditions, and technology to teach the target language (TL). Through this day-to-day endeavour, we are rewarded by witnessing learners’ successful TL acquisition. Sharing our research outcomes, therefore, is critical to generating new public knowledge. Since educators are often required to publish to improve career prospects and raise their visibility as scholars, choosing the right journal for those purposes is vital. The JALT CALL Journal, a Scopus Q1 journal, has been evolving along with Computer-Assisted Language Learning as a research field and will celebrate its 20th anniversary next year. In this session, as the Editor-in-Chief of the JALT CALL Journal, the presenter will explain the factors that need to be considered before preparing a manuscript, the necessary components of an academic paper, and what is looked at during the initial editorial decision. Unfortunately, it is not uncommon for researchers, even after successfully completing their research, to struggle to publish in academic journals, facing a series of desk rejections without knowing why. This session, therefore, aims to discuss some of the issues concerning our review process and to demystify the world of academic publishing to reduce the risk of rejection.
Language teachers have long used paper-based handouts and worksheets in their classrooms for a range of purposes, including setting up and running communicative activities, delivering lesson content, and providing practice exercises. However, there has been a recent shift away from using paper in class in favor of electronic devices, such as laptops and tablets. In response, teachers have had to re-think how they approach classroom activities that traditionally relied on handouts. How can a teacher, for instance, best set up a find-someone-who interview activity without the use of paper? In this presentation, we provide an introduction and walkthrough of the powerful, versatile, and simple method of creating fillable PDF files. As we demonstrate, fillable PDFs overcome many of the limitations found in trying to deliver materials digitally through online learning management systems or by distributing digital word processing files to students. They allow teachers to easily implement the kind of interactive, communicative activities that have long been at the center of the language classroom. Furthermore, we demonstrate how generative AI tools such as ChatGPT can be used to quickly create properly formatted classroom activities that can then be converted to fillable PDFs. The coupling of generative AI and fillable PDFs provides a powerful tool to teachers who may feel stranded in the paperless era.
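The presenters walk through their own workflow in the session; purely as an illustration of the underlying idea, and not necessarily their method, a find-someone-who worksheet with typeable fields can be generated with the open-source reportlab library's AcroForm support (the file name, prompts, and layout below are hypothetical).

```python
# Illustrative sketch only: a simple "find someone who..." worksheet exported
# as a fillable PDF using reportlab's AcroForm fields (not the presenters' tool).
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import A4

prompts = [
    "Find someone who has travelled abroad.",
    "Find someone who plays a musical instrument.",
    "Find someone who reads the news in English.",
]

c = canvas.Canvas("find_someone_who.pdf", pagesize=A4)
c.setFont("Helvetica", 12)
form = c.acroForm
y = 780
for i, prompt in enumerate(prompts, start=1):
    c.drawString(60, y, f"{i}. {prompt}")
    # A text field students can type a classmate's name into on any device.
    form.textfield(name=f"answer_{i}", tooltip="Classmate's name",
                   x=60, y=y - 30, width=300, height=20,
                   borderStyle="inset", forceBorder=True)
    y -= 70
c.save()
```

In a workflow like the one the abstract describes, a generative AI tool could be asked to draft the list of prompts, which is then dropped into a script such as this and distributed as a single fillable PDF.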
As we consider how to introduce Japanese students to the world of AI, it is important to understand their experience with educational technology in general. This session presents nine years of survey data on Japanese university students’ experiences with using technology in learning. The surveys, conducted among over 1,000 first-year university students at both private and public universities between 2011 and 2019, asked about their experience with using technology to learn. The survey results will be presented in this session with both quantitative and qualitative details about what types of things were learned using technology. Knowing what experiences the students come to the classroom with will allow teachers to better plan for the integration of new AI and other technologies into their lessons and advise students on how to effectively use technology in their learning pursuits. Attendees of this session will learn about the experiences students have had with using technology in learning and then will discuss how to leverage those into productive use of technology in the classroom today as we move into the AI era. Attendees will also discuss the challenges they have experienced in integrating technology into language learning and share lessons learned.
This presentation discusses the results of a technological literacy survey of undergraduate English majors (N = 285) conducted in 2022 (n = 176) and 2023 (n = 109) at a small private women’s university in Western Japan. The survey asked participants about their technology use pre-entry, changes in their technology use since entry, preferences regarding the various devices available for accessing applications commonly used in language learning classes, and issues they had with devices (smartphones, tablets, and computers) and applications (required and self-selected) used to complete a range of language-learning assignments. Preferences regarding the devices and applications were also sought in the survey. Comments from interviews with a subset of participants (n = 8) further elucidate the survey findings. Interview questions explored participants’ survey responses to establish a clearer understanding of how students used technology for learning. Results from the Time 1 and Time 2 survey administrations, changes over time for those who completed the survey twice, and comments obtained in the interviews will be provided, and the implications for language learning programs at this institution and others will be discussed.
In project-based language learning (PBLL), students engage with real-world issues and use the language they are learning to produce products for audiences beyond the classroom. PBLL was incorporated into a business English class for third- and fourth-year university students. In this case, the real-world issue was provided by a local company that wanted to market its Airbnb to English-speaking visitors, and the product the students produced was a series of Instagram posts. The company president visited the classroom virtually to explain the business, provided websites and written documents explaining the purpose of the project, and gave feedback on students’ draft Instagram posts. At the conclusion of the project, the company used the students’ work on its official Instagram account.
In this presentation, I will explain the process of finding a local partner, the development of the project in collaboration with the company, and what students did in the classroom. I will share class materials, such as the series of assignments that led to the final products submitted to the company. I will conclude by showcasing what went well, areas that need improvement, and prospects for PBLL in future classes.
Few studies have investigated the effect of annotated vlogs on second language (L2) listening comprehension and vocabulary learning in mobile-assisted language learning environments (Çakmak & Erçetin, 2017). Accordingly, this study aims to investigate the effect of annotated vlogs on listening comprehension and vocabulary learning in a mobile listening environment. This study adopted a quasi-experimental design and recruited 120 non-English-major college students at a pre-intermediate level of English proficiency. They were divided into three groups: 40 in the control group, 40 in experimental group one (captioned vlogs), and 40 in experimental group two (annotated vlogs). The research instruments comprised pre- and post-tests of listening comprehension and vocabulary retention, and questionnaires on self-efficacy and perceptions of annotated vlog learning. The results showed that the students in the annotated vlog group improved their listening comprehension significantly more than those in the control group and the captioned vlog group. The students in the annotated vlog group also had higher vocabulary retention than those in the other groups; however, their vocabulary retention did not significantly differ from that of the captioned vlog group. The students in the annotated vlog group increased their self-efficacy significantly more than those in the other two groups and held positive perceptions of the mobile annotated vlog learning mode. Relevant pedagogical implications will be suggested for future vlog-assisted English teaching practice in mobile listening environments.
Given the essential role of teachers’ feedback in language teaching and learning, a great body of research has been conducted to find ways of boosting the quality of teachers’ comments on students’ performance. More importantly, there has been a consensus among researchers that computer-assisted feedback is likely to yield various merits, including specificity for students and time efficiency for teachers in language classrooms. Nonetheless, little has been written about the use of screen video recording technology, especially through the Loom website, in providing more detailed, vivid, and effective video feedback on learners’ writing assignments, particularly in the Vietnamese context. To find a tool that makes feedback on students’ writing assignments more detailed and convenient, I conducted this action research study, which aims to shed light on learners’ perspectives on the use of screen recording technology (Loom) in the teacher’s feedback on their writing assignments. Two data collection instruments, namely surveys and semi-structured interviews, were used to collect data from a class of students majoring in English language at a university in Hanoi, Vietnam. The results revealed that all of the students responded positively to the video feedback recorded via Loom, which could bring several benefits for both learners and teachers. Compared with conventional text comments, Loom multimedia feedback was reported to be not only more detailed and vivid, thanks to the combination of text and audio explanation that could facilitate students’ deeper understanding, but also more time-saving and convenient for both students and teachers. Loom feedback could also better convey teachers’ emotions through tone and intonation, something that can barely exist in traditional comments. However, one minor potential drawback was the need for a quiet place and a stable internet connection to produce and watch Loom videos.