Measuring reading comprehension is hard. Can AI and adaptive tools help?



Artificial intelligence might be able to drive cars, treat disease, and train your front door to recognise your face. But can it crack the toughest nut in literacy: helping kids comprehend what they read?

AI is evolving to meet reading instruction and assessment needs, some experts say. For instance, some believe it won't be long before tools that use AI's natural language processing capabilities to measure skills like phonemic awareness are commonplace in schools. Intelligent tutors that can coach students to demonstrate in writing an understanding of a text they've read are already emerging.

If AI can improve reading instruction and assessment, it could fill important gaps, educators say.

Reading assessments can serve a variety of purposes: to identify which students need extra help; to diagnose students' trouble spots; and to monitor progress, which can gauge whether a particular intervention is working.

But at this point, there isn't a single digital or analogue product on the market that can do all those things well, said Matthew Burns, a professor of special education at the University of Missouri and director of the University of Missouri Center for Collaborative Solutions for Kids, Practice, and Policy.

Moreover, the most important reading skill – comprehension – is also the toughest to measure and teach, Burns said. That's partly because measuring it means assessing what students know and the strength of their vocabulary, not just whether they can sound out words.

"Our assessment of reading comprehension is very surface level," he said. "We have to figure out a better way to do it. I wouldn't be surprised if AI was part of the solution to get a really good assessment" of reading comprehension.

But he added a big caveat: "I don't think technology can replace a teacher."

Digital adaptive tools get good reviews from teachers but can't do it all

For now, digital adaptive tools – most of which don't include an AI component – are among the most widely used technologies to help teachers assess students' reading abilities.

Adaptive reading software adjusts the level of difficulty for students based on what they are mastering, advancing them to higher levels or pushing them back to more basic instruction depending on how well they are performing. Adaptive assessments are also used, though plenty of teachers still measure students' reading ability without digital tools.
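The adjustment logic that description refers to can be boiled down to a short sketch. The class name, thresholds, and level range below are hypothetical and purely illustrative; commercial products generally rely on statistical models such as item response theory rather than simple rules like these.

```python
# Minimal sketch of the adaptive idea: move a student up or down a difficulty
# level based on recent accuracy. All names and thresholds here are
# hypothetical; real products typically use item-response-theory models.

class AdaptiveReadingSession:
    def __init__(self, level: int = 3, min_level: int = 1, max_level: int = 8):
        self.level = level            # current difficulty level
        self.min_level = min_level
        self.max_level = max_level
        self.recent_results = []      # True/False for the most recent answers

    def record_answer(self, correct: bool, window: int = 5) -> int:
        """Record one response; adjust the level after each full window."""
        self.recent_results.append(correct)
        if len(self.recent_results) < window:
            return self.level

        accuracy = sum(self.recent_results) / len(self.recent_results)
        self.recent_results.clear()

        if accuracy >= 0.8:           # mastering this level: advance
            self.level = min(self.level + 1, self.max_level)
        elif accuracy < 0.5:          # struggling: step back to easier material
            self.level = max(self.level - 1, self.min_level)
        return self.level


session = AdaptiveReadingSession(level=3)
for answer in [True, True, False, True, True]:    # 4 of 5 correct
    current = session.record_answer(answer)
print(current)  # -> 4: the student advances one level
```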

Digital adaptive tools can save teachers a lot of time and effort, said Heather Esposito, a technology teacher coach for New Jersey's Cherry Hill district who previously worked as an English teacher and reading specialist.

In the past, she said, teachers might sit with a student while they read a few passages from a story or article, keeping track of errors, self-corrections, and other factors to determine the child's fluency level. Then they might ask the student to tell them about what they read, posing different questions to gauge comprehension.

"That takes a lot of time," Esposito said. "So that's why software programs came in to try and help with that."

That's likely why Esposito and other educators generally find adaptive reading software useful, according to a survey of 1,058 educators conducted by the EdWeek Research Center from Jan 26 through Feb 1. Forty-four percent said they think the tech does a better job of accurately assessing a student's reading level than non-adaptive software or pen-and-paper methods, including 14% who said it does a "much better job."

That's compared to just 18% who said it does a worse job, including 4% who said it is "much worse." Thirty-eight percent said the effectiveness is about the same.

The current tools have clear limitations, however, Esposito pointed out. Teachers should never rely just on adaptive tools to assess student reading levels, she said.

Teachers need to closely supervise students taking adaptive assessments because they "aren't foolproof," said Catherine Snow, a professor at Harvard's Graduate School of Education who specialises in children's literacy development.

Students can press a wrong button without realising it, altering their score, Snow said. Or kids could get a low score on an adaptive assessment because it's making them do a boring task and they disengage. While the tools can often adapt to a kid's reading level, many are less able to adapt to a student's particular interests, Snow added.

"Kids have things they want to read about and things they don't want to read about," she said. "We sort of ignore that, with a seven-year-old and say, 'He's not reading this! He's not getting his practice in!' Well, it's some story about dolls and princesses. He doesn't really care."

What's more, students might not take the assessment particularly seriously. "We know as adults that this assessment could be consequential," Snow said. "The kid just thinks it's another stupid thing he's being asked to do."

There are structural weaknesses too. Some digital reading tools examine reading fluency in part by looking at how quickly students read, Snow said. "Those assessments, in my view, incentivise teachers to push for speed reading, rather than for deep reading, which often means you have to slow down."

Like Esposito, Snow finds digital tools particularly lacking when it comes to reading comprehension. "Comprehension is what's really hard to measure," Snow said. "There are very few tests that even hint at the deeper comprehension levels that we would really like kids as young as third or fourth grade to be able to get into."

That's not a trivial problem, she added. "Comprehension is what it's all about, right? Really, that's the reason we're teaching kids to read."

Could AI help measure students' reading comprehension and improve writing?

Some educational technology and literacy organisations are optimistic that adding AI to adaptive reading tools might offer the best opportunity yet to tackle that missing reading comprehension piece.

For instance, Quill, a nonprofit ed-tech literacy organisation, has created an AI-powered tool that can read students' answers to open-ended questions about a passage or article. The tool can then coach students to use evidence from the text, as well as proper grammar, to improve their responses. That can help give students the practice they need to improve both their reading comprehension and writing skills, said the organisation's founder and CEO, Peter Gault.

Building reading comprehension through writing is a departure from the typical approach, Gault said. "Almost every reading tool today uses multiple-choice questions as the main mechanism for then demonstrating your knowledge of the text. Our perspective on multiple choice is that it is a shallower way of learning."
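Quill hasn't detailed its model here, but the general idea of checking whether a response draws on the passage at all can be illustrated crudely with lexical overlap. The sketch below is purely illustrative and is not Quill's method; production feedback systems use trained language models that can recognise paraphrased evidence and weigh grammar and reasoning.

```python
# Purely illustrative: a crude lexical-overlap check for whether a student's
# open-ended answer borrows wording from the source passage. This is NOT how
# Quill's tool works; it only sketches the "use evidence from the text" idea.

import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "was", "he", "she", "they", "on", "for", "with", "so"}

def content_words(text: str) -> set:
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def uses_text_evidence(passage: str, answer: str, threshold: float = 0.3) -> bool:
    """Flag answers that share a minimum fraction of content words with the passage."""
    answer_words = content_words(answer)
    if not answer_words:
        return False
    overlap = len(answer_words & content_words(passage)) / len(answer_words)
    return overlap >= threshold


passage = ("Wolves hunt in packs because cooperation lets them bring down "
           "prey far larger than any single wolf.")
answer = "The wolves work together in packs so they can catch larger prey."
print(uses_text_evidence(passage, answer))  # -> True
```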

Khan Academy, a nonprofit digital learning company with more than 145 million registered users, is also considering using AI to help with students' reading comprehension and writing.

"In the next year, there's going to be ways that you can actually do reading comprehension and writing at the same time, where there's a passage, and then the AI essentially works with the student to construct essentially a five-paragraph essay, arguing a point anchored in that essay," said the organisation's founder, Sal Khan, in an interview. "So, it's both reading comprehension and writing at the same time. Stuff like this never happened before."

For her part, Esposito has already been experimenting with the latest version of ChatGPT, the AI-powered chatbot that emerged late last year. She's asked it, for instance, to explain the hero cycle – a common language arts concept – to a 10-year-old who loves video games, or a 15-year-old who reads manga, a popular style of Japanese comics and graphic novels. The tool produced responses that were much better than she had expected.

"You could take a topic or a concept and ask it to level it" to match the students' reading level and "to make it more meaningful" given a kid's personal interests, Esposito said. And she expects the tools will only improve with time.

"AI is a hard trend," Esposito said, meaning it's here for the long term. ChatGPT is just an early iteration, she said, likening it to the search engines of the late 1990s.

But even powerful AI technology still needs substantial teacher input, she said. "It's about striking a really good balance of seeing the potential that's out there with AI, finding the tools that work best for you and your students and knowing that you can pivot at any point," Esposito said.

Snow seconded that sentiment, and cautioned teachers to rely on their own judgement even as increasingly complex AI reading tools emerge.

"Teachers should always know that their instincts might be better" than the people who designed the software or the school leaders who purchased it, Snow said. "If they think something is not really working very well, it might be because it's not really working very well, and they should be cautious about imposing it on students." – The Charlotte Observer/Tribune News Service
