Google has outlined four steps it’s taking to fight the spread of extremist material on its YouTube video service.
Kent Walker, general counsel at Google, said Sunday that the U.S. technology giant is “committed to being part of the solution” in tackling online extremist content.
“Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all,” Walker wrote in a blog post.
“There should be no place for terrorist content on our services.”
The four new steps are:
- Putting more engineering resources into further developing artificial intelligence software that can be trained to identify and remove extremist content.
- Expanding the number of independent experts in YouTube’s Trusted Flagger program. Google will add 50 expert non-government organizations to the 63 organizations that are already part of the program, and support them with additional grants. Google said Trusted Flagger reports are accurate over 90 percent of the time.
- Taking a tougher stance on videos that do not clearly violate YouTube’s rules. For example, a video containing inflammatory religious or supremacist content will appear behind a warning and will not be monetized, recommended, or open to user comments. The aim is to reduce engagement with these videos and make them harder to find.
- Working with Jigsaw, the company behind “The Redirect Method,” which uses ad targeting to direct potential ISIS recruits to anti-terrorism videos that could change their minds about joining extremist organizations. Google said that in previous trials of this system, potential recruits clicked through on the ads at an “unusually high rate” and watched over half a million minutes of video content that “debunks terrorist recruiting messages.”
The latest measures build on Google’s previous efforts to fight extremist content on its platform, amid broader criticism of internet companies by politicians.