Fable, a Book App, Makes Changes After Offensive A.I. Messages
Fable, a popular app for talking about and tracking books, is changing the way it creates personalized summaries for its users after complaints that an artificial intelligence model used offensive language.
One summary suggested that a reader of Black narratives should also read white authors.
In an Instagram post this week, Chris Gallello, the head of product at Fable, addressed the problem of A.I.-generated summaries on the app, saying that Fable began receiving complaints about “very bigoted racist language, and that was shocking to us.”
He gave no examples, but he was apparently referring to at least one Fable reader’s summary posted as a screenshot on Threads, which rounded up the book choices the reader, Tiana Trammell, had made, saying: “Your journey dives deep into the heart of Black narratives and transformative tales, leaving mainstream stories gasping for air. Don’t forget to surface for the occasional white author, okay?”
Fable replied in a comment under the post, saying that a team would work to resolve the problem. In his longer statement on Instagram, Mr. Gallello said that the company would introduce safeguards. These included disclosures that summaries were generated by artificial intelligence, the ability to opt out of them and a thumbs-down button that would alert the app to a potential problem.
Ms. Trammell, who lives in Detroit, downloaded Fable in October to track her reading. Around Christmas, she had read books that prompted summaries related to the holiday. But just before the new year, she finished three books by Black authors.
On Dec. 29, when Ms. Trammell saw her Fable summary, she was stunned. “I thought: ‘This cannot be what I am seeing. I am clearly missing something here,’” she said in an interview on Friday. She shared the summary with fellow book club members and on Fable, where others shared offensive summaries that they, too, had received or seen.
One person who read books about people with disabilities was told her choices “could earn an eye-roll from a sloth.” Another said a reader’s books were “making me wonder if you’re ever in the mood for a straight, cis white man’s perspective.”
Mr. Gallello said the A.I. model was intended to create a “fun sentence or two” taken from book descriptions, but some of the results were “disturbing” in what was intended to be a “safe space” for readers. The “playfulness” approach by the A.I. model would be removed, and further steps were being considered, he added.
Fable did not respond on Friday to an emailed request for comment, including questions about how many summaries readers had flagged. But Mr. Gallello said that Fable had heard from “a couple of” readers after its filters for offensive language and topics failed to stop the offensive content.
“Clearly in both cases that failed this time around,” he added.
A.I. has become an independent, timesaving but potentially problematic voice in many communities, including religious congregations and news organizations. With A.I.’s entry into the world of books, Fable’s action highlights the technology’s ability, or failure, to navigate the subtle interpretations of events and language that are necessary for ethical behavior.
It also raises the question of how closely employees should check the work of A.I. models before releasing the content. Some public libraries use apps to create online book clubs. In California, San Mateo County’s public libraries offered premium access to the Fable app through their library cards.
Apps including Fable, Goodreads and The StoryGraph have become popular forums for online book clubs and for sharing recommendations, reading lists and genre preferences.
Some readers responded online to Fable’s mishap, saying they were switching to other book-tracking apps or criticizing the use of any artificial intelligence in a forum meant to celebrate and amplify human creativity through the written word.
“Just hire actual, professional copywriters to write a capped number of reader personality summaries and then approve them ~before they go live. 2 million users do not need ‘individually tailored’ snarky summaries,” one reader said in reply to Fable’s statement.
Another reader who posted online pointed out that the A.I. model “knew to capitalize Black and not white” but still generated racist content.
She added that it showed some creators of A.I. technology “lack the deeper understanding of how to apply these concepts toward breaking down systems of oppression and discriminatory perspectives.”
Mr. Gallello said that Fable was deeply sorry for “putting out a feature that can do something like that.”
“This is not what we want, and it shows that we have not done enough,” he said, adding that Fable hoped to earn back trust.
After she received the summary, Ms. Trammell deleted the app.
“It was the presumption that I do not read outside of my own race,” she said. “And the implication that I should read outside of my own race if that was not my prerogative.”