
Commentary: Educators shouldn’t rush to punish students based on what AI detectors say

Universities use AI detectors to “catch” students submitting work generated by artificial intelligence, but these tools can be unreliable and potentially biased, say academics Jasper Roe and Mike Perkins.

Banning AI in education is like trying to hold back a tidal wave with a teacup. (Photo: iStock/SolStock)

SINGAPORE: When ChatGPT was first released, it caused a panic in the education sector. Many schools and universities banned its use, fearing it would destroy students’ ability to learn and be tested properly.

However, less than two years on, the tone has changed dramatically. The International Baccalaureate (IB) allows artificial intelligence to be used in the completion of schoolwork, AI tools are acceptable in academic writing for publication, and educators from primary schools to universities are incorporating ChatGPT into school assignments.

While these developments raise ethical concerns, one thing is clear: Banning AI in education is like trying to hold back a tidal wave with a teacup. Instead, we need to learn how to use it.

However, recent publications have stated that while universities in Singapore encourage the critical use of AI tools in academic work, they may also use AI detection tools such as Turnitin’s AI detector.

While there is nothing wrong with using these technologies to educate, we need to be crystal clear with students and educators that they have limitations and cannot be the basis for punishing students.

Furthermore, familiarising ourselves with the benefits and limitations of the current AI models prepares us for tomorrow’s advancements. Recently, OpenAI released one of the world’s most powerful models, GPT-4o, for public use. GPT-4o handles input in audio, text and visuals, and produces sophisticated outputs.
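To illustrate what that multimodality looks like in practice, here is a minimal sketch using OpenAI’s Python SDK to send GPT-4o a text prompt together with an image. The prompt and image URL are placeholders for illustration, not part of our research.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# GPT-4o accepts mixed text and image content in a single message
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarise the key argument of this slide."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/lecture-slide.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```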

These tools are only going to get better with time. If we think of the leap from the first mobile phone to the sleek, powerful smartphones of today, we can get a sense of what’s to come.

PITFALLS OF “CATCHING” STUDENTS WITH AI DETECTORS

Despite this, many universities try to use AI detectors to “catch” students submitting AI-generated work and then penalise them for it.

AI detectors are designed to identify text generated by AI systems like ChatGPT. They work by analysing patterns and word usage typical of AI writing tools.
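As a deliberately simplified illustration of this kind of pattern analysis (not how Turnitin or any commercial detector actually works), the sketch below flags text whose sentence lengths are unusually uniform. Low “burstiness” is one statistical cue associated with machine-generated prose; the threshold here is invented purely for illustration.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Ratio of sentence-length standard deviation to mean length.

    Human writing tends to mix short and long sentences (high burstiness);
    machine-generated text is often more uniform (low burstiness).
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

def looks_ai_generated(text: str, threshold: float = 0.25) -> bool:
    # Threshold invented for illustration; real detectors combine many
    # signals and are trained on large labelled corpora.
    return burstiness(text) < threshold
```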

However, these technologies can be unreliable and potentially biased. Our recent research project demonstrates that simple techniques such as adding spelling errors can reduce the effectiveness of AI detectors by up to 22 per cent.
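That research involved systematically perturbing machine-generated text. Purely as an illustration of the idea, and not the exact procedure used in our study, a perturbation function might swap adjacent letters in a random sample of words:

```python
import random

def add_typos(text: str, rate: float = 0.05, seed: int = 42) -> str:
    """Introduce misspellings by swapping adjacent letters in some words.

    Perturbations like this disturb the word-level statistics that
    detectors key on, which is why they degrade detection accuracy.
    """
    rng = random.Random(seed)
    words = text.split()
    for i, word in enumerate(words):
        if len(word) > 3 and rng.random() < rate:
            j = rng.randrange(len(word) - 1)
            words[i] = word[:j] + word[j + 1] + word[j] + word[j + 2:]
    return " ".join(words)
```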

Almost all students who use AI to write essays will edit and modify the output, meaning detection won’t work well, if at all. To be blunt, if a student’s work shows up as entirely AI-generated, all it means is that they are not very good at using AI. Only in the simplest cases of copy-and-paste is an AI detector guaranteed to give a positive result.

AI detectors struggle to keep up with quickly changing AI models, and their reliance on standardised measures of what is considered “human” can unfairly disadvantage people who speak English as a second or third language. The potential of falsely accusing students and damaging their future raises serious concerns about the use of AI detectors in academic settings.

Furthermore, this approach is counterproductive in a world where we should be reaping the benefits of AI. You can’t extol the advantages of using a calculator and then punish students for not doing math in their heads.

Educators shouldn’t rush to punish students based on what AI detectors say. Instead, they should think of better ways to assess students.

A DIFFERENT APPROACH TO AI USAGE IN EDUCATION

AI tools have much potential in education. They can assist students in brainstorming ideas, structuring their thoughts and editing their work to improve clarity and coherence. By using these tools, students can enhance their digital literacy and prepare for a future where AI will play a significant role in various professional fields.

But since detecting AI is a dead end, what should educators do when they can’t tell if a student’s work is “their own”?

One solution for educators is to move away from a binary “AI or no-AI” policy, and adopt a scaffolded approach that makes clear to students how much AI can be used in completing a task. Educators can provide a range of assessments where AI use is allowed to varying degrees: for instance, AI-generated content can be used to help improve students’ work in essays, but banned during in-person examinations.

This gives educators a picture of students’ knowledge and abilities without technology, as well as of how adept they are at using it.

Working with colleagues in Vietnam and Australia, we developed a tool called the AI Assessment Scale (AIAS). This scale allows educators to tailor AI usage to the needs of different subjects and assessment types, ensuring that AI enhances learning outcomes without compromising academic integrity.
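As a rough sketch of how such a scale can be operationalised (the level names below paraphrase the published AIAS and are not its canonical wording), each assessment can be tagged with a permitted level of AI use:

```python
from enum import IntEnum

class AIAS(IntEnum):
    """Paraphrased levels of the AI Assessment Scale (illustrative only)."""
    NO_AI = 1              # e.g. invigilated, in-person examinations
    IDEA_GENERATION = 2    # AI for brainstorming/planning; final text is the student's
    AI_EDITING = 3         # AI may polish drafts the student has written
    AI_WITH_EVALUATION = 4 # AI completes parts of the task; student critiques and refines
    FULL_AI = 5            # AI used throughout, with appropriate acknowledgement

# A course might then declare a permitted level per assessment:
assessments = {
    "final_exam": AIAS.NO_AI,
    "essay_draft": AIAS.AI_EDITING,
    "ai_literacy_project": AIAS.FULL_AI,
}
```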

It empowers teachers to stop fretting over whether their students did or did not use AI and focus on teaching students to engage with AI tools ethically and responsibly. This involves providing guidance on proper citation of AI-generated content and fostering an understanding of the limitations and potential biases of AI tools.

The AI Assessment Scale (Table provided by the authors)

SEEK BETTER WAYS TO ASSESS LEARNING

Universities are taking a step in the right direction by allowing the use of AI tools under strict guidelines. Continuous evaluation and adaptation of assessment methods are necessary to ensure they meet the evolving needs of students and the demands of future workplaces.

By adopting more nuanced AI policies and focusing on ethical usage, educators can harness the benefits of AI while maintaining academic integrity and promoting independent thinking.

We hope to see universities worldwide remove the focus on detecting AI, and instead equip students with knowledge in their subjects and the ethical use of new technologies that will shape their careers to come.

Jasper Roe is Head of Department, Language School at James Cook University, Singapore. Mike Perkins is Associate Professor and Head of the Centre for Research and Innovation at British University Vietnam, Vietnam.

Source: CNA/el
