Facebook launches Online Civil Courage Initiative to tackle rising extremism in the UK

The Online Civil Courage Initiative (OCCI) in the UK is a Facebook partnership with the Institute for Strategic Dialogue

Facebook has been name-checked in nearly every recent report about the dangers of online extremism, from enabling extremists to broadcast their views through live video to failing to take down hate speech and offensive, dangerous content.

It's now taking action. In partnership with the Institute for Strategic Dialogue, Facebook is launching the Online Civil Courage Initiative (OCCI) in the UK, a counterspeech program to help tackle online extremism and hate speech.

The initiative is being jointly announced in London by Facebook's chief operating officer Sheryl Sandberg and the Institute for Strategic Dialogue's CEO Sasha Havlicek, alongside founding partners Brendan Cox, husband of murdered MP Jo Cox and head of the Jo Cox Foundation; Mark Gardner of the Community Security Trust; Fiyaz Mughal of Tell MAMA; and Shaukat Warraich of Imams Online.

The OCCI is being set up to offer "financial and marketing support to UK NGOs working to counter online extremism" and will bring together experts to develop best practice and tools. This includes training for NGOs to help them monitor and respond to extremist content; a support desk so they can contact Facebook directly; marketing support for counterspeech campaigns, including Facebook advertising credits; knowledge sharing with NGOs, government and other online services; and financial support for academic research into online and offline patterns of extremism.

The launch of the OCCI in the UK follows similar launches in Germany in January 2016 and in France in March 2017. In the UK, the OCCI will share campaigns, experiences, advice and challenges online using Facebook Groups and the OCCI's UK Facebook page.


Tackling extremism online is a grey area, and one that has seen extreme measures proposed in response. Ahead of this month's election, both home secretary Amber Rudd and prime minister Theresa May called for services to be banned from using end-to-end encryption, arguing it gives terrorists and extremists "safe spaces" to promote hate speech. Following the London Bridge and Borough Market terrorist attack, May said she plans to regulate social media companies and crack down on terrorist material being posted.

"Big companies" providing web services should be controlled by "international agreements" written to help stop extremist content being shared online, the prime minister said outside 10 Downing Street before this month's election. Read more: How Facebook is using AI to tackle terrorism online

These comments about online laws formed part of a series of statements outlining how May believes terrorism should be dealt with in the wake of the third terror attack on the UK in four months; tougher jail sentences, reinforcing British values and less tolerance of extremism were among the other measures.

May's speech has since been criticised by terrorism experts, internet researchers and people working within the technology industry, with some branding it "intellectually lazy". Those against the plans say clamping down on social networks could simply change how those involved in terrorism communicate.

"Even a successful effort to regulate (and therefore neuter) the internet would push terrorists underground", Josh Cowls, a web researcher at MIT, told WIRED. The Open Rights Group agrees, saying that regulations "could push these vile networks into even darker corners of the web, where they will be even harder to observe". May is also warning social networks they will be fined if they fail to tackle the problem, which echoes the sentiment of plans put forward by Germany's Angela Merkel last year.

Last week, Facebook addressed some of these concerns "head on" in a blog post by Monika Bickert, director of global policy management, and Brian Fishman, Facebook's counterterrorism policy manager. In it, the pair write: "Our stance is simple: There's no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them. Although academic research finds that the radicalisation of members of groups like ISIS and Al Qaeda primarily occurs offline, we know the internet does play a role. We believe technology, and Facebook, can be part of the solution."

The blog post continues that Facebook has been cautious about addressing the problem in the past for fear it would suggest the site believes there is an easy technical fix. "It is an enormous challenge to keep people safe on a platform used by nearly 2 billion every month, posting and commenting in more than 80 languages in every corner of the globe," the pair explain. "And there is much more for us to do. But we do want to share what we are working on and hear your feedback so we can do better."


Measures listed in the post include the advanced use of AI to tackle rising levels of hate speech, running these AI measures alongside human expertise, and partnering with other companies, civil society, researchers and governments. The launch of OCCI forms part of the latter.

Facebook's algorithms try to predict when you may be thinking of getting married or looking for a job so the company can show appropriate ads. As a result, the company has been quick to stress that its AI-led algorithms are used for commercial purposes, not political, legal or moral ones. This allows Facebook to put the onus on the public to police the site on its behalf, reporting inappropriate, troubling or even blatantly illegal content, and to devolve a certain level of responsibility away from its algorithms.

That's not to say automation plays no part in the reporting of hate speech. AI "prevents videos from being reshared in their entirety", and Facebook uses automation to recognise duplicate reports, direct reports to moderators with the appropriate expertise, identify nude or pornographic content that has been removed before and prevent spam attacks. Elsewhere, Facebook uses PhotoDNA technology to automatically identify known child abuse content from a global shared database overseen by the authorities, although it falls short of using its own technology to identify new content.
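As a rough illustration of the kind of automation described above, the sketch below groups duplicate reports of the same piece of content and routes each group to a review queue based on the reporter's language. The report fields, queue names and routing rule are hypothetical, not Facebook's actual system.

```python
from collections import defaultdict

# Hypothetical reports: (report_id, content_id, reporter_language)
reports = [
    ("r1", "post_123", "en"),
    ("r2", "post_123", "en"),   # duplicate report of the same post
    ("r3", "post_456", "ar"),
]

# Group duplicate reports so each piece of content is reviewed only once
grouped = defaultdict(list)
for report_id, content_id, language in reports:
    grouped[content_id].append((report_id, language))

# Route each piece of reported content to moderators with the right language expertise
queues = defaultdict(list)
for content_id, items in grouped.items():
    language = items[0][1]
    queues[f"review_queue_{language}"].append(content_id)

print(dict(queues))  # {'review_queue_en': ['post_123'], 'review_queue_ar': ['post_456']}
```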

How Facebook plans to tackle terrorism

Image matching: When someone tries to upload a terrorist photo or video, Facebook says its systems check whether the image matches a known terrorism photo or video. If it has previously removed a propaganda video from ISIS, it can work to prevent other accounts from uploading the same video to the site.
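To make the idea concrete, here is a minimal sketch of matching an upload against a database of hashes of previously removed media. It uses an exact cryptographic hash for simplicity; the hash value and database are invented, and real systems such as PhotoDNA rely on perceptual hashes so that re-encoded or slightly altered copies still match.

```python
import hashlib

# Hypothetical database of hashes of media already removed as terrorist propaganda
KNOWN_REMOVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(media_bytes: bytes) -> str:
    """Exact SHA-256 hash of the file (a stand-in for a perceptual hash)."""
    return hashlib.sha256(media_bytes).hexdigest()

def matches_known_content(media_bytes: bytes) -> bool:
    """True if the upload matches content that has previously been removed."""
    return fingerprint(media_bytes) in KNOWN_REMOVED_HASHES

# Usage: check an incoming upload before it is published
print(matches_known_content(b"example upload bytes"))  # False for this unseen example
```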

Language understanding: Facebook said it has recently started to experiment with using AI to understand text that "might be advocating for terrorism". This includes analysing text that has already been removed for praising or supporting terrorist organisations so the system can learn to spot similar posts. The machine learning algorithms work on a feedback loop and improve over time.
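A toy version of this kind of text classifier might look like the sketch below, which trains a simple model on a handful of illustrative examples and scores a new post; the training data and model choice are assumptions for illustration, not Facebook's actual approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative training data: text previously removed for supporting terrorism (label 1)
# versus benign text (label 0). Real training sets are far larger and multilingual.
texts = [
    "join the fight and support the attack",
    "great recipe for lentil soup",
    "we praise the fighters of the operation",
    "looking forward to the football match",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new post; high-scoring posts could be queued for human review.
score = model.predict_proba(["support the attack on the city"])[0][1]
print(f"probability the text is violating: {score:.2f}")

# Feedback loop: moderator decisions on flagged posts are fed back into the
# training data and the model is periodically retrained.
```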

Removing terrorist clusters: Facebook said: "We know from studies of terrorists that they tend to radicalise and operate in clusters," and that this offline trend is seen online as well. When Facebook identifies Pages, groups, posts or profiles as supporting terrorism, it uses algorithms to “fan out” to try to identify related material that may also support terrorism.
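The "fan out" step can be pictured as a graph traversal from content already identified as violating, as in this hedged sketch; the graph, its edges and the breadth-first approach are illustrative assumptions rather than Facebook's actual algorithm.

```python
from collections import deque

# Hypothetical graph linking pages, groups and profiles that share admins,
# posts or other signals with one another.
RELATED = {
    "page_A": ["group_B", "profile_C"],
    "group_B": ["profile_C", "page_D"],
    "profile_C": [],
    "page_D": [],
}

def fan_out(seed, related):
    """Breadth-first traversal from a seed identified as supporting terrorism,
    collecting related material to queue for review."""
    seen = {seed}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for neighbour in related.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen - {seed}

# Usage: page_A has been identified as supporting terrorism
print(fan_out("page_A", RELATED))  # {'group_B', 'profile_C', 'page_D'}
```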

Recidivism: Facebook claims it has improved how quickly it can detect new fake accounts created by repeat offenders, reducing the time that recidivist terrorist accounts remain on Facebook.

Cross-platform collaboration: Facebook is working on systems that will help it take action against terrorist accounts across its other services, including WhatsApp and Instagram.

At the start of May, Facebook founder Mark Zuckerberg announced the firm is hiring 3,000 more members of its community operations team to review user reports of hate speech, extremism and other offensive material on the social network. Once completed, it will bring the team total to 7,500. The team looks at all types of reports, including hate speech and child exploitation, two issues the social network has been under pressure to dramatically improve upon. Zuckerberg added that Facebook needs to respond faster to reports and is building new ways to make reporting simpler and the review process faster.

Last week's blog post expanded on this to add that Facebook now employs more than 150 people who are "exclusively or primarily focused on countering terrorism as their core responsibility". This includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers. This specialist team is said to speak nearly 30 languages.

“The recent terror attacks in London and Manchester – like violence anywhere – are absolutely heartbreaking. No one should have to live in fear of terrorism – and we all have a part to play in stopping violent extremism from spreading," Sandberg said ahead of today's launch. “We know we have more to do – but through our platform, our partners and our community we will continue to learn to keep violence and extremism off Facebook.”

Cox added that the OCCI is "a valuable and much-needed initiative from Facebook in helping to tackle extremism. Anything that helps push the extremists even further to the margins is greatly welcome. Social media platforms have a particular responsibility to address hate speech that has too often been allowed to flourish online."

This article was originally published by WIRED UK