Description
The use of generative AI in academic writing raises concerns about authorship authenticity, especially regarding AI-edited content (Liu et al., 2024; Walters, 2023). This study evaluated the accuracy of AI-detection tools by analyzing 450 essays in a cross-sectional quantitative design. Results showed that ZeroGPT outperformed SciSpace in identifying both student-produced and AI-generated texts, demonstrating high reliability. Refining detection algorithms and enhancing training datasets could further improve accuracy on complex academic writing.
Keywords: Artificial intelligence, detection tools, academic integrity, coherence, authenticity
| Teaching Context |
|---|
| College and university education |