Google Training Ad Placement Computers to Be Offended

Over the years, Google trained computer systems to keep copyrighted content and pornography off its YouTube service. But after seeing ads from Coca-Cola, Procter & Gamble and Wal-Mart appear next to racist, anti-Semitic or terrorist videos, its engineers realized their computer models had a blind spot: They did not understand context.

Now teaching computers to understand what humans can readily grasp may be the key to calming fears among big-spending advertisers that their ads have been appearing alongside videos from extremist groups and other offensive messages.

Google engineers, product managers and policy wonks are trying to train computers to grasp the nuances of what makes certain videos objectionable. Advertisers may tolerate use of a racial epithet in a hip-hop video, for example, but may be horrified to see it used in a video from a racist skinhead group.

That ads bought by well-known companies can occasionally appear next to offensive videos has long been considered a nuisance to YouTube’s business. But the issue has gained urgency in recent weeks, as The Times of London and other outlets have written about brands that inadvertently fund extremists through automated advertising — a byproduct of a system in which YouTube shares a portion of ad sales with the creators of the content those ads appear against.

This glitch in the company’s giant, automated process turned into a public-relations nightmare. Companies like AT&T and Johnson & Johnson said they would pull their ads from YouTube, as well as Google’s display advertising business, until they could get assurances that such placement would not happen again.

Consumers watch more than a billion hours of video on YouTube every day, making it the dominant video platform on the internet and an obvious beneficiary as advertising money moves online from television. But the recent problems opened Google to criticism that it was not doing enough to look out for advertisers. It is a significant problem for a multibillion-dollar company that still gets most of its revenue from advertising.

“We take this as seriously as we’ve ever taken a problem,” Philipp Schindler, Google’s chief business officer, said in an interview last week. “We’ve been in emergency mode.”

Over the last two weeks, Google has changed what types of videos can carry advertising, barring ads from appearing with hate speech or discriminatory content.

In addition, Google is simplifying how advertisers can exclude specific sites, channels and videos across YouTube and Google’s display network. It is allowing brands to fine-tune the types of content they want to avoid, such as “sexually suggestive” or “sensational/bizarre” videos.

It is also putting in more stringent safety standards by default, so an advertiser must choose to place ads next to more provocative content. Google created an expedited way to alert it when ads appear next to offensive content.

The Silicon Valley giant is trying to reassure companies like Unilever, the world’s second-largest advertiser, with a portfolio of consumer brands like Dove and Ben & Jerry’s. As other brands started fleeing YouTube, Unilever discovered three instances in which its brands appeared on objectionable YouTube channels.
