YouTube, the primary platform for sharing and consuming online video, is taking measures to restrict the reach, visibility, and profitability of extremist content. The streaming giant already discourages “controversial religious or supremacist content,” but is now putting a more deliberate plan of action in place.
In a blog post titled “An update on our commitment to fight terror content online,” YouTube addresses its concerns about hateful content and outlines a four-step process for combating it:
- Better detection and faster removal driven by machine learning: a mix of technology and human review helps identify and remove such content
- More experts: YouTube is working with more than 15 additional expert NGOs and institutions
- Tougher standards: applying tougher treatment to videos that aren’t illegal but flagged by users as potential violations of YouTube’s policies on hate speech and violent extremism
- Early intervention and expanded counter-extremism work: searches for sensitive keywords will trigger video playlists that directly confront and debunk violent extremist messages
The YouTube team states:
> Altogether, we have taken significant steps over the last month in our fight against online terrorism. But this is not the end. We know there is always more work to be done. With the help of new machine learning technology, deep partnerships, ongoing collaborations with other companies through the Global Internet Forum, and our vigilant community we are confident we can continue to make progress against this ever-changing threat. We look forward to sharing more with you in the months ahead.