As universities in China and around the world grapple with the rapid rise of artificial intelligence use among their students, a study in Britain has found that frequent use of AI tools could hurt critical thinking abilities.
Fudan University in Shanghai, which in November was the first in China to introduce regulations on the use of AI tools in undergraduate theses, is continuing to review and improve the rules during the trial phase, according to state news agency Xinhua.
In recent weeks, Beijing Normal University, East China Normal University, Communication University of China and Tianjin University of Science and Technology also released regulations on AI use.
Universities in other countries – including leading institutions in Britain, Canada, Germany, Japan, Singapore and the US – are also racing to develop rules that balance use of the evolving technology with the integrity of teaching and learning.
According to the British study, published on Jan 3 in the peer-reviewed journal Societies, analysis of responses from more than 650 people aged 17 and over showed evidence of lower critical thinking skills among young people who use AI extensively.
“Younger participants who exhibited higher dependence on AI tools scored lower in critical thinking compared to their older counterparts,” wrote the study’s author Michael Gerlich from the SBS Swiss Business School.
“This trend underscores the need for educational interventions that promote critical engagement with AI technologies, ensuring that the convenience offered by these tools does not come at the cost of essential cognitive skills,” he wrote.
Respondents were recruited online in Britain through different social media platforms and completed a questionnaire which included questions about the frequency of their AI tool usage and how much they relied on the technology.
The questions were designed to evaluate how much the respondents were delegating memory and problem-solving tasks to AI – a phenomenon known as cognitive offloading – as well as their critical thinking skills.
Gerlich – who heads the school’s centre for strategic corporate foresight and sustainability, and whose research includes the impact of AI on society – said the observed decline could be attributed to cognitive offloading.
While external aids like notes, calculators and now AI help us to manage information, excessive reliance on them may harm memory retention and critical analysis skills, according to Gerlich.
According to the study, participants aged 17 to 25 displayed higher usage of AI tools and more cognitive offloading, while scoring lower on indicators of critical thinking. Respondents over 46 scored lower on AI use and cognitive offloading, but higher on critical thinking.
The study also found that higher educational attainment was associated with better critical thinking skills regardless of AI usage, suggesting that education could mitigate the technology’s potential adverse effects, Gerlich said.
Fifty participants were also interviewed for the study, with some raising concerns about the transparency and bias of AI recommendations, according to the peer-reviewed paper.
“I sometimes wonder if AI is subtly nudging me toward decisions I wouldn’t normally make,” said one respondent quoted in the study. “I rarely reflect on the biases behind the AI recommendations; I tend to trust them outright,” another said.
Many of those interviewed, especially the young adults, said that tools like virtual assistants and search engines have become integral to their daily lives.
Some of the younger respondents said that “they often relied on AI to remember information, solve problems or make decisions rather than engaging in deeper cognitive processes,” according to the study.
One participant, described as aged between 26 and 45, was quoted in the study as saying: “I find myself using AI tools for almost everything – whether it’s finding a restaurant or making a quick decision at work”.
“It saves time, but I do wonder if I’m losing my ability to think things through as thoroughly as I used to,” the participant said.
Conversely, participants aged over 46 were more likely to prefer using “traditional methods of problem-solving and information-gathering, which they felt kept their cognitive skills sharper”.
“I still prefer to read through multiple sources and think critically about the information I gather. I’m cautious about relying too much on AI because I don’t want to lose my ability to analyse and make decisions independently,” one interviewee said.
In China, an article published by Xinhua on Tuesday investigated the growing “AI smell” in college homework amid concerns among teachers that the technology could affect students’ writing skills and their ability to think independently.
Fudan University’s academic affairs office told Xinhua that it is continuing to refine the AI guidelines and assessing whether students are able to express their research ideas effectively.
“AI tools are now commonly used by teachers to assist in teaching and students to enhance their learning efficiency. The use of AI tools requires teachers’ consent,” a representative of the office told Xinhua.
“Educators should help students understand the functions and limitations of AI tools – which should only be used as supporting tools – and inform students about the boundaries of AI tool usage.”
In a separate study published in September, a team from Sweden identified 139 questionable papers in computing, environmental, health and other research fields on the academic search engine Google Scholar.
The Swedish researchers said the papers contained common responses used by ChatGPT, including “as of my last knowledge update” and “I don’t have access to real-time data”, but did not declare the use of AI.
While most of the papers appeared in journals that are not indexed in reputable bibliographic databases, some were published in mainstream scientific journals and conference proceedings, according to the study.
Some of the identified papers were found in university databases and were attributed to students, the researchers said.
“The abundance of fabricated ‘studies’ seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardise the integrity of the scientific record,” they warned. – South China Morning Post