Risks of LLMs in Education

Introduction

Internet access is now essential in many professions, and online education is increasingly popular and important for students. Large language models (LLMs) can offer valuable resources that improve teaching and learning, but they can also hinder education if misused.

The Real Danger of ChatGPT

Academic Integrity & Plagiarism Concerns

Students can use advanced language models like ChatGPT to help complete their assignments. Because these tools are accessible to anyone with a desktop browser or smartphone app, instructors and school officials have raised concerns about academic integrity. Although these large language models appear to produce original content, they rely entirely on the data they were trained on and cannot properly cite sources. Since schools emphasize the importance of original student work, the question arises whether students can legitimately present text written by an LLM as their own.

According to Mike Perkins, in the article Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond, academic integrity policies in higher education are often vague and leave considerable room for interpretation (2023). Therefore, “we need to consider a range of possible ways that students may use an LLM to support them in the writing of an assessment. Doing this helps us to understand where we may draw a line between acceptable practice, breaches of academic integrity, academic misconduct, and/or plagiarism” (Perkins, 2023). To address this, Perkins suggests that using LLM-based tools should not be considered a violation of academic integrity or misconduct if students explicitly state how they have used such tools in their writing (2023).

Additional Risks

Critical Thinking

Another concern is that students will let artificial intelligence technology think for them rather than thinking for themselves. With a few prompts, a student can generate a satisfactory answer without working through the problem, which can erode critical thinking skills over time.

Data Privacy & Security

When students grant LLMs like ChatGPT access to their personal information, there are risks associated with sharing that information online. These systems may access or retain student data without permission, and that data could be used for purposes unrelated to education.

Bias & Fairness

Artificial intelligence tools can introduce bias in several ways, starting with the data used to train the system: if the training data is not diverse enough, the results will reflect that imbalance. If the algorithms that process the data are not designed to be fair, they can also perpetuate existing biases. In addition, the people who design and train an AI system can unintentionally build their own biases into it. It is important to be aware of these potential biases and work to mitigate them so that AI technologies are fair and just.
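To make the first point concrete, a rough check of how well different groups are represented in a training set can be run before a model is trained. The short Python sketch below is only an illustration: the record format, the group field name, and the 10% threshold are assumptions made for the example, not part of any standard toolkit.

from collections import Counter

def representation_report(records, group_field="group", min_share=0.10):
    """Count how often each group appears in a training set and flag
    groups whose share falls below an (arbitrary) minimum share."""
    counts = Counter(r.get(group_field, "unknown") for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"count": n, "share": round(share, 3),
                         "underrepresented": share < min_share}
    return report

# Toy example: group B makes up only 5% of the records.
sample = [{"group": "A"}] * 95 + [{"group": "B"}] * 5
print(representation_report(sample))

A check like this only surfaces obvious imbalances; it says nothing about subtler biases in the labels or in how the model is ultimately used.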

Mitigation

AI tools can also be used to mitigate misuse of these technologies, for example by developing AI-powered plagiarism detection tools that identify and flag instances of academic dishonesty. However, current technologies are not reliable at recognizing AI-generated content. Perkins (2023) reports that studies of GPT-3 output show that as more complex LLMs are used, the ability of humans to detect machine-generated material drops even further. Furthermore, current detection tools may fail to identify machine-written text, or may inappropriately flag the writing tools used by non-native English speakers (Perkins, 2023). Perkins also cautions that any tools used to support the detection of LLM output must be continually re-evaluated as new LLMs emerge and as methods of avoiding detection are developed, resulting in an ongoing “arms race” (2023).
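To illustrate why detection is hard, one common family of heuristics scores how statistically predictable a passage is to a language model, on the theory that machine-generated text tends to have unusually low perplexity. The Python sketch below assumes the Hugging Face transformers and torch packages and picks an arbitrary threshold; it shows the idea only, and it suffers from exactly the weaknesses Perkins describes.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    """Average per-token perplexity of the text under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def looks_machine_generated(text, threshold=40.0):
    """Treat very low perplexity as one weak signal of machine-generated
    text. The threshold is an illustrative guess, not a validated value."""
    return perplexity(text) < threshold

A heuristic like this is easily fooled by paraphrasing and can misfire on fluent but formulaic human writing, which is one reason detection tools may disadvantage non-native English speakers.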

There are other ways to discourage misuse of AI-powered technologies for cheating. One is using AI to monitor student activity on online platforms and flag unusual or suspicious behavior for human review. Another is using AI to build personalized learning plans that help students improve their academic performance and reduce the incentive to cheat.
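As a rough sketch of the monitoring idea, behavioral signals such as how quickly a quiz is completed can be compared against the class distribution so that outliers are surfaced for human review rather than automatically penalized. The data, field names, and cutoff in this Python example are hypothetical, and any real deployment would need careful privacy safeguards.

from statistics import mean, stdev

def flag_rushed_submissions(times_minutes, z_cutoff=1.5):
    """Flag submissions finished unusually quickly relative to the class.
    The z-score cutoff is an illustrative assumption, not a policy."""
    mu = mean(times_minutes)
    sigma = stdev(times_minutes)
    flags = []
    for i, t in enumerate(times_minutes):
        z = (t - mu) / sigma if sigma else 0.0
        flags.append({"submission": i, "minutes": t, "review": z < -z_cutoff})
    return flags

# Toy example: most students take about an hour; one finishes in 4 minutes.
print(flag_rushed_submissions([58, 63, 55, 61, 70, 4]))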

Citations

Perkins, M. (2023). Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2). https://doi.org/10.53761/1.20.02.07