An artificial intelligence (AI) system that can write essays, solve mathematical problems, generate computer code, hold a chat conversation, and even crack jokes seems hard to imagine, but that is what ChatGPT, a groundbreaking new AI-driven chatbot developed by research company OpenAI, does.
ChatGPT has been accessible to the public since November last year; anyone can sign up and start using it right away, which is what I did. Here is an excerpt of what ChatGPT produced when I asked it to write an article on the topic of AI in higher education:
“One of the main benefits of AI in higher education is the ability to personalise the learning experience. AI can analyse a student’s performance and provide tailored feedback and recommendations for improvement. This can help students learn at their own pace and reduce the workload for instructors. AI is also being used to create virtual assistants and tutors that can answer questions and provide support to students. These tools can help students learn more efficiently and effectively, and they can be accessed anytime and anywhere with an Internet connection.”
It’s scarily impressive, and it is all but impossible to tell whether the text was written by a human or by AI. The Generative Pre-trained Transformer (GPT) technology that ChatGPT is built on is smart enough to string together meaningful sentences in a matter of seconds. What is special about ChatGPT is its seeming ability to exhibit creativity – a quality previously thought to be beyond AI.
Some argue that ChatGPT will eventually replace Google search and become the de facto way in which people get information. The fact that OpenAI was co-founded by Tesla mogul Elon Musk and is funded by the likes of tech giant Microsoft lends some weight to that argument.
As an educator, I can already hear the alarm bells ringing. I have no way of telling whether the assignments my students submit were written by them or by an AI system like ChatGPT.
The Guardian, in its article “AI bot ChatGPT stuns academics with essay-writing skills and usability”, reported that ChatGPT generated answers to exam questions that would have been given full marks by examiners had they been submitted by a student. It has also been reported that ChatGPT passed an MBA exam, the US Medical Licensing Examination, and exams at a US law school. Universities should be seriously worried.
Cheating and other breaches of academic integrity are nothing new. Even before AI came along, technology had made cheating, plagiarism, collusion, and other dishonest behaviours as simple as cutting and pasting.
Notably, the widespread adoption of plagiarism detection tools such as Turnitin by many universities has proven to be an ineffective deterrent. The article “One in 10 uni students submit assignments written by someone else – and most are getting away with it” in The Conversation suggests that 10% of students engage in some form of cheating, and that 95% of those who cheat are never caught.
With AI, the chance of catching cheaters will be even slimmer, and the temptation to cheat even greater. Contract cheating, where students pay external companies to ghostwrite essays, is already a systemic problem.
Those students can now get ChatGPT to do the ghostwriting, effectively for free. The big question is how universities will maintain academic integrity in the face of rapid advances in AI. Universities would be delusional to think they can fight AI by investing in ever more advanced AI. Nor can they sidestep the problem by simply “trusting” their students under the cover of an honour code.
The short, and possibly only, answer when it comes to student assessment is to rely less on the “digital” and more on the “physical”. Think of students sitting for exams in a physical exam hall rather than remotely online. Or students discussing their work before a live classroom audience, responding to questions on the spot. Or students presenting a highly personalised portfolio based on their own experience rather than a generic assignment.
AI is here to stay and can only get smarter. And it’s not just about ChatGPT – there is a plethora of generative AI tools flooding the Internet.
Universities need to embrace the fact that we are living in an AI-inclusive world.
Educators should rethink how they approach teaching, learning and student assessment to take advantage of what AI offers, but at the same time protect academic integrity to ensure that student learning outcomes are not compromised.
Prof Wing Lam is the provost and chief executive officer at University of Reading Malaysia, an international branch campus of the University of Reading, United Kingdom. He has held a variety of academic positions in Malaysia, Singapore and the UK. He completed his PhD in computer science at King’s College London in 1994 and has published over 80 articles in peer-reviewed journals. His current research interests include technology and innovation. The views expressed here are the writer’s own.