Utilitarianism of Generative AI: Refocusing on Educational Outcomes
The application of generative AI in higher education should focus on educational outcomes. By automating administrative tasks, providing real-time feedback, and removing language barriers, it improves both efficiency and equity. However, its use must align with course objectives to prevent misuse: educators should set clear boundaries, and students must uphold academic integrity. Generative AI is not a threat but a tool, and its successful integration depends on each party's responsibility and on the development of innovative assessment systems.
Generative AI entered higher education with the release of ChatGPT (Hu, 2023, cited in Chan and Hu, 2023), sparking an ongoing debate about responsible use. Generative AI ("GenAI") is software designed to produce human-like text from given instructions, built on the transformer architecture (Dai et al., 2023). Major criticisms concern its unreliable output and threats to academic integrity (Chan and Hu, 2023); however, panic does not alleviate the problem (Lim et al., 2023). As a tool, GenAI should be restricted only where restrictions serve educational objectives. This helps students and colleges refocus on their educational goals. This essay explains how GenAI is currently integrated into higher education and what that integration means for students, faculty, and the university.
GenAI is a productive tool for both individuals and the education system. A survey of university students in Hong Kong conducted by Chan and Hu (2023) indicates that relief from repetitive administrative work is one of the key reasons students support using GenAI. Such uses free people to focus on creative academic activities and answer concerns about GenAI's negative effects by confining it to routine tasks. For scholarly work, GenAI also shows promise: two significant examples are summarising and indexing ideas in long texts, and brainstorming to identify research gaps (Chan and Hu, 2023). Beyond individual scholars, GenAI benefits the wider cause of higher education. A real-time feedback chatbot built with GenAI can help students track their progress while saving institutional resources (Chan and Hu, 2023), replacing what previously required a consultation booking and a long wait. With less resource consumption, the education system can be extended affordably to more students (Dai et al., 2023). GenAI further promotes fairness by eliminating language privilege: thanks to its strong semantic understanding, scholars writing in different native languages are less constrained by inaccurate translation software (Lim et al., 2023). Some researchers (Dai et al., 2023) nevertheless worry about a decline in ability after excessive use of assistive tools. While this is a valid concern, it may be more accurately framed as "specialisation": when professional knowledge demands a long period of education, people inevitably rely on specialised tools or agents for tasks outside their own field in order to deepen their expertise within it. In this way, students and educators can dedicate their time to the outcomes they are expected to achieve, using tools like GenAI.
Responsibility for setting suitable objectives and maintaining fine-grained control lies with the course designer. Educators and institutions are not only supervisors, but coworkers (Dai et al., 2023). Opposition to electronic devices in the classroom was once widespread among educational institutions, and it later extended to software; many of these efforts did not turn out as well as expected (Lim et al., 2023). Ultimately, the question is not "use or not", but "when to use". Spelling is an important target of primary school education, so educators do not expect dictionaries in dictation exercises at that stage. As students grow up, such goals and limitations should change to suit more advanced objectives, just as grammar-correction software is now widely used among academic writers. Likewise, if the goal of a course is to learn citation systems, then using GenAI to generate citations should be prohibited; otherwise, using GenAI for typesetting should be allowed. Regulations should not be so exhaustive that they reduce the efficiency of students and lecturers. Admittedly, GenAI's apparent "omniscience" may make institutions more inclined to ban it outright rather than partially. However, just as word processors for essays became normalised, the use of GenAI will likely follow a similar path to acceptance.
While course designers are responsible for regulating AI according to course objectives, authors remain responsible for their work (Dai et al., 2023). These obligations include achieving the intended outcomes, fact-checking, academic integrity, and all other current standards. For example, AI-generated content may merely paraphrase prior articles rather than offer critical analysis (Lim et al., 2023); researchers must therefore know their field well and ensure originality. To supervise students on this front, and given that detecting AI-generated text remains difficult (Chan and Hu, 2023), Dai et al. (2023, p. 10) claim that "Plagiarism detection software must advance to effectively identify instances of plagiarism, academic dishonesty, or unauthorized use of intellectual property in written work." Yet distinguishing and quantifying the subtle linguistic and semantic differences between works is an infeasible feat. Scholars should pay less attention to such technicalities and focus on making genuine innovations and contributions, which is the true "educational outcome".
In conclusion, GenAI has significant potential to enhance higher education by promoting efficiency and accessibility. The key to integrating GenAI is to align its use with educational objectives and so liberate the productivity of a powerful tool. Lecturers, course designers, and students each bear responsibility for ensuring its ethical use. Future efforts should focus on developing robust evaluation systems that can empirically assess genuine academic innovation and critical thinking, thereby ensuring that GenAI serves as a facilitator of academic progress.
References
Chan, C.K.Y. & Hu, W. (2023). 'Students' voices on generative AI: perceptions, benefits, and challenges in higher education', International Journal of Educational Technology in Higher Education, 20, article 43.
Dai, Y., Liu, A. & Lim, C.P. (2023). 'Reconceptualizing ChatGPT and generative AI as a student-driven innovation in higher education', Procedia CIRP, 119, pp. 84–90.
Lim, W.M., Gunasekara, A., Pallant, J.L., Pallant, J.I. & Pechenkina, E. (2023). 'Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators', The International Journal of Management Education, 21(2).