Photo: BBC via Getty Images

TikTok scales back AI-generated video descriptions after absurd errors

Liv McMahon

TikTok has rowed back on an AI feature that incorrectly summarised some videos on the platform, including claiming that a celebrity was a fruit.

The company's 'AI overviews' recently began appearing beneath content on the platform to describe what a video was showing, or provide more context.

Though the feature was rolled out only to some users in the US and the Philippines, its incorrect and bizarre AI-generated summaries of TikTok content - seen beneath videos of celebrities such as platform star Charli D'Amelio - have been shared widely.

According to TikTok, its experimental summaries have been tweaked so that they now only suggest products similar to those shown in videos.

News outlet Business Insider first reported the changes.

Much like the AI Overviews at the top of most Google search results, TikTok's AI-generated overviews would attempt to summarise a video's content for some users when they clicked to see more of a video's caption.

Screenshots of examples seen by the BBC showed videos on the platform accurately described, but Business Insider also identified several "wildly inaccurate" AI overviews.

This included one that described a video of dancer Charli D'Amelio as a "collection of various blueberries with different toppings," the publication said.

It saw similarly vague, inaccurate, and strange AI-generated summaries in other TikTok videos featuring celebrities and artists, including Shakira and Olivia Rodrigo.

The feature will now only be used to surface information about items in videos, according to TikTok.

It comes as tech firms look to deploy more AI products on their platforms to boost user engagement. However, some such efforts have been met with user backlash or mockery when these tools go awry.

'Cutting through water'

Posts reacting to TikTok's testing of AI overviews on its videos first began appearing in January.

But it appears the summaries were made more widely available, with several users and creators highlighting AI-generated descriptions containing absurd mistakes in late April.

A recent example shared on Reddit featured a performance by ballroom dancers Reagan and Juli, described in an AI overview on TikTok as "a person repeatedly striking their head with a rubber chicken".

Other examples shared by TikTok users contained similarly strange descriptions.

For instance, AI overviews for two separate videos, neither of which featured violence or tools, said they featured "a person repeatedly striking their head with a hammer".

According to TikTok, users could report and provide feedback on AI overviews.

But this did not stop some from speculating as to whether the platform was "trolling" its users.

"The new AI Overview is so bad it feels like it has to be a joke," wrote TikTok user and creator Brett Vanderbrook alongside his video.

He showed a range of examples where TikTok's AI feature conjured up bizarre descriptions for what was happening in videos - such as a comedy skit described as someone "demonstrating a new, clever technique for cutting through water".

Goblins and glue pizza

TikTok says it has identified the cause of AI overview errors and inconsistencies, but it hasn't detailed what that is.

Generative AI tools often make things up when responding to users or summarising information, and their errors can range from the hilarious to the potentially harmful.

Google was widely mocked in 2024 after its AI Overviews told users to eat rocks and to put glue on pizza.

Apple later faced criticism after an AI tool designed to summarise notifications created false headlines for the BBC News and the New York Times apps.

The tech giant suspended the feature, saying it would improve and update it.

Since then, AI development has continued, with firms claiming the tech has vastly improved in ability and accuracy, but so-called "hallucinations" persist.

ChatGPT-maker OpenAI, for example, recently said it had identified the words 'goblin' and 'gremlin' creeping into its systems' responses - a quirk it believes arose after a tool trained to have a nerdy persona incentivised mentioning the creatures.

False case law or citations appearing in court filings have meanwhile prompted warnings about AI use in legal settings, with AI errors also reportedly causing issues for some governments. 


Namibian Sun 2026-05-12