Can students use AI tools to pass at university?
As exam season begins, students may be tempted to turn to new artificial intelligence (AI) tools to give them an edge on exams.
Universities have scrambled to understand what AI applications such as ChatGPT are capable of and to introduce guidance on their use. Now they are being urged to teach students how to use them.
Academics from the University of Bath have considered the challenges and opportunities.
“Our first question was, ‘Could students use this to answer our assessment questions?'” says James Fern of ChatGPT — an online tool that can answer questions, including creating essays and emails, in human-like language.
“Multiple-choice questions, for example, it handles very well.
“We definitely didn’t expect it to do that well… it was almost 100% right.”
But it struggles with more complex questions that require critical thinking from students.
An example from a final assessment: “Why is it important to understand the timing of exercise in relation to the nutritional status of people who are overweight?” (“Nutritional status”, James notes, is a technical term.)
And there are telltale signs that the answer provided by ChatGPT wasn’t written by a student.
“At first glance, it looks very good: it is cleanly written and the language looks quite professional,” says James.
However, some of its statements read more like the work of a GCSE student than of a university student.
It has a habit of repeating the exact wording of the question in its introductions and conclusions, “only written in slightly different ways”.
And when it cites sources of information, as is customary in academic work, it simply makes them up.
“They look perfect – they have the right author names, they have the right journal names, the titles all sound very reasonable – they just don’t exist,” says James.
“If you don’t know how large language models work, you could very easily believe that these are real references.”
Since ChatGPT was released to the public about six months ago, many students have been unsure of when they can and cannot use it.
“I might be tempted to use ChatGPT… but right now I’m too scared to, in case I get caught,” says one student walking across campus between classes.
“It’s not yet clear what counts as cheating with ChatGPT,” says another. “If you copied your entire assignment from ChatGPT, that’s cheating, but it can really help guide you.”
New guidance from the Quality Assurance Agency, which reviews standards at UK universities, urges them to equip students with AI skills they can take into the world of work.
It encourages universities to explain to new and returning students in September how and when they may use AI, and to adjust courses where necessary.
Marketing lecturer Kim Watts calls it “another tool in the toolbox.” And some students in her department this semester have already started using ChatGPT in coursework that asks them to create a marketing plan.
“I suggest students who might not know where to start go to ChatGPT… and just start playing around with prompts,” she says.
“It won’t give you answers – but it can give you ideas.”
Kim demonstrates by asking ChatGPT to create a marketing plan of its own.
It responds with a numbered list of points, covering everything from creating a brand identity to using social media.
But Kim, looking up from her screen, says: “That’s not going to pass.
“Submitting something like this just isn’t detailed enough. It doesn’t show us learning, it doesn’t show critical thinking.”
Neurodivergent students and those for whom English is not their first language could benefit the most from ChatGPT, says Kim.
But any student who chooses to use it is asked to submit their ChatGPT prompts and responses as an appendix, to “really show” how far they have moved on from the chatbot’s answers.
Summer exams
As at most universities, Bath’s policy on ChatGPT and other AI tools is a work in progress; it should be ready by September.
After that, a team will meet throughout the year to make sure it keeps pace with the rapidly changing technology.
Many staff are returning to in-person, invigilated exams this summer.
Dr Chris Bonfield, who leads a team that helps design assessments, says the “default assumption” is that students should not be using ChatGPT this year, and that if staff choose to allow it, they should be clear about their expectations.
The pace at which the technology is advancing poses a challenge for universities, but Bath quickly moved away from talk of a ban.
“This tool is not going away,” says Chris.
“To ensure our students are equipped with the skills they need for the future workplace, but also that our degrees remain current, we need to engage with these tools.”
Last week, Geoffrey Hinton, widely regarded as the godfather of AI, quit Google, saying he regretted his work and warning that chatbots could soon be smarter than humans.
Prof Verena Rieser, a computer scientist at Heriot-Watt University who has worked in AI for two decades, says her own students are “using it in very creative ways”, but that chatbots are still at an early stage of development and “can be used to generate misinformation to a degree that is obviously very worrying” when it comes to education.
Earlier models behind ChatGPT were not released to the public because they were deemed “too dangerous”, she says.
Its developer, OpenAI, says that “like any technology, these tools come with real risks”, and that it works “to ensure safety is built into our system at every level”.
Since the launch of ChatGPT, other companies have stepped up their AI efforts. Google, for example, has released Bard, which is only available to adults.
“I would expect that soon we’ll see different types of ChatGPT from different companies, and hopefully safer models that actually mitigate the potential threats,” says Verena.
“Right now we don’t really know how to stop the models from giving out false, toxic or hateful information – and that’s a big problem.”