Fable, a book app, makes changes after offensive AI messages


FILE — Shelves of books at a seller in Dublin, Sept. 4, 2024. After a feature in the book-tracking app Fable shocked some readers, who flagged “bigoted” language in an artificial intelligence feature that crafts summaries, the company introduced safeguards. (Ellius Grace/The New York Times)

Fable, a popular app for talking about and tracking books, is changing the way it creates personalised summaries for its users after complaints that an artificial intelligence model used offensive language.

One summary suggested that a reader of Black narratives should also read white authors.

In an Instagram post this week, Chris Gallello, the head of product at Fable, addressed the problem of AI-generated summaries on the app, saying that Fable began receiving complaints about “very bigoted racist language, and that was shocking to us.”

He gave no examples, but he was apparently referring to at least one Fable reader’s summary posted as a screenshot on Threads, which rounded up the book choices the reader, Tiana Trammell, had made, saying: “Your journey dives deep into the heart of Black narratives and transformative tales, leaving mainstream stories gasping for air. Don’t forget to surface for the occasional white author, okay?”

Fable replied in a comment under the post, saying that a team would work to resolve the problem. In his longer statement on Instagram, Gallello said that the company would introduce safeguards. These included disclosures that summaries were generated by artificial intelligence, the ability to opt out of them and a thumbs-down button that would alert the app to a potential problem.

Trammell, who lives in Detroit, downloaded Fable in October to track her reading. Around Christmas, she had read books that prompted summaries related to the holiday. But just before the new year, she finished three books by Black authors.

On Dec 29, when Trammell saw her Fable summary, she was stunned. “I thought: ‘This cannot be what I am seeing. I am clearly missing something here’,” she said in an interview Friday. She shared the summary with fellow book club members and on Fable, where others shared offensive summaries that they, too, had received or seen.

One person who read books about people with disabilities was told her choices “could earn an eye-roll from a sloth”. Another said a reader’s books were “making me wonder if you’re ever in the mood for a straight, cis white man’s perspective”.

Gallello said the AI model was intended to create a “fun sentence or two” taken from book descriptions, but some of the results were “disturbing” in what was intended to be a “safe space” for readers. Filters for offensive language and topics failed to stop the offensive content, he added.

Fable’s head of community, Kim Marsh Allee, said in an email Friday that two users received summaries “that are completely unacceptable to us as a company and do not reflect our values”.

She said all of the features that use AI were being removed, including summaries and year-end reading wraps, and a new app version was being submitted to the app store.

AI has become an independent and timesaving, but potentially problematic, voice in many communities, including religious congregations and news organisations. With AI's entry into the world of books, Fable's misstep highlights the technology's ability, or failure, to navigate the subtle interpretations of events and language that are necessary for ethical behaviour.

It also raises the question of how closely employees should check the work of AI models before letting the content loose. Some public libraries use apps to create online book clubs. In California, San Mateo County's public libraries offered premium access to the Fable app through their library cards.

Apps including Fable, Goodreads and The StoryGraph have become popular forums for online book clubs and for sharing recommendations, reading lists and genre preferences.

Some readers responded online to Fable, saying they were switching to other book-tracking apps or criticising the use of any artificial intelligence in a forum meant to celebrate and amplify human creativity through the written word.

“Just hire actual, professional copywriters to write a capped number of reader personality summaries and then approve them before they go live. 2 million users do not need ‘individually tailored’ snarky summaries,” one reader said in reply to Fable’s statement.

Another reader who learned on social media about the controversy pointed out that the AI model “knew to capitalise Black and not white” but still generated racist content.

She added that it showed some creators of AI technology “lack the deeper understanding of how to apply these concepts toward breaking down systems of oppression and discriminatory perspectives”.

Gallello said that Fable was deeply sorry. “This is not what we want, and it shows that we have not done enough,” he said, adding that Fable hoped to earn back trust.

After she received the summary, Trammell deleted the app.

“It was the presumption that I do not read outside of my own race,” she said. “And the implication that I should read outside of my own race if that was not my prerogative.”

©2025 The New York Times Company
