YouTube’s poor AI training led to rise of child exploitation videos


YouTube uses algorithms and human moderators, but it still couldn't prevent the rise of disturbing, child-exploitative videos on the platform. Why? According to a BuzzFeed report, one likely reason is the confusing set of guidelines the company gives the contract workers who rate its content. The publication interviewed search quality raters who help train the platform's search AI to surface the best possible results for queries by rating videos. It found that the workers are usually instructed to give videos high ratings based mostly on production values.

As one rater said:

“Even if a video is disturbing or violent, we can flag it but still have to say it’s high quality [if the task calls for it].”

That means raters have to mark videos as "high quality" even if they contain disturbing content, which can give those links a boost in search results. The problem? Child-exploitative videos found on the platform usually have good production values: they typically require some effort to create and are professionally edited.

After the media put a spotlight on disturbing videos aimed at children, YouTube started asking raters to decide whether a video is suitable for 9-to-12-year-old viewers watching unsupervised. Raters were told to mark a video "OK" if they think a child can watch it, or "NOT OK" if it contains sexually explicit content, violence, crude language, drug use or actions that encourage bad behavior, such as pranks.

However, the raters BuzzFeed interviewed found the examples YouTube gave confusing at best. Taylor Swift's "Bad Blood" music video, for instance, is NOT OK based on the examples the platform gave, while videos containing moderate animal violence are apparently OK.

Bart Selman, a Cornell University professor of artificial intelligence, told BuzzFeed:

“It’s an example of what I call ‘value misalignment.’ It’s a value misalignment in terms of what is best for the revenue of the company versus what is best for the broader social good of society. Controversial and extreme content — either video, text, or news — spreads better and therefore leads to more views, more use of the platform, and increased revenue.”

YouTube will have to come up with a more concrete set of guidelines and make rating less confusing for its human workers if it wants to clean up its platform. Otherwise, enlisting 10,000 employees to help review videos won't make a significant impact at all. We reached out for YouTube's response to BuzzFeed's report and will update this post once we hear back.