Social media groups join forces to counter online terror ("extremist") content

Started by MikeWB, December 05, 2016, 08:22:40 PM


MikeWB

They'll use ISIS/Al-Qaeda content as an excuse to censor everything that they don't consider mainstream. They'll probably label Trump's policies as extremist content as well.

Facebook, Google, Microsoft and Twitter on Monday announced they had joined forces in an attempt to curb explicit terrorist imagery online.

The move follows criticism from Brussels that big US social media groups have made insufficient effort to clamp down on hate speech.

In a statement, the technology groups said they were building new tools to identify extremist content, including terrorist recruitment videos and images of executions, via a digital fingerprint known as a "hash", which would be compiled into a shared global database. Once created, the hash would be attached to the content like a watermark, making it easy to identify and take down.

"Our companies will begin sharing hashes of the most extreme and egregious terrorist images and videos we have removed from our services," the companies said. "By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms."

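For readers curious about the mechanics, here is a minimal sketch of the hash-sharing idea. The articles do not name the fingerprinting algorithm, so a plain SHA-256 digest stands in for it, and the SharedHashDatabase class and its method names are invented purely for illustration; a real deployment would more likely use a perceptual hash so that re-encoded or slightly altered copies of the same video still match.

```python
import hashlib


def fingerprint(content: bytes) -> str:
    """Return a hex digest ("hash") acting as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()


class SharedHashDatabase:
    """In-memory stand-in for the cross-company database of shared hashes."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def contribute(self, content: bytes) -> str:
        """A participating platform adds the hash of content it has removed."""
        h = fingerprint(content)
        self._hashes.add(h)
        return h

    def matches(self, content: bytes) -> bool:
        """Another platform checks a new upload against the shared hashes."""
        return fingerprint(content) in self._hashes


if __name__ == "__main__":
    db = SharedHashDatabase()
    removed_video = b"bytes of a video one platform has already taken down"
    db.contribute(removed_video)              # platform A shares only the hash
    print(db.matches(removed_video))          # True: an exact copy is flagged
    print(db.matches(b"a re-encoded copy"))   # False: byte-exact hashing misses it
```

The last line of the demo shows why a cryptographic digest is only a stand-in here: changing a single byte changes the hash entirely, so matching that survives re-encoding requires a perceptual fingerprint rather than an exact one.
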
The project will be presented at the EU Internet Forum on Thursday, with the database launching in early 2017.

The companies said the collaboration was not a knee-jerk response to European demands but had been under development for several months.


Social media companies have long been accused of failing to take enough action to prevent terrorists using their platforms for propaganda and recruitment. This year, the British parliament's home affairs committee said they were "consciously failing" to staunch the flow of terrorist propaganda, adding that it was "alarming" that the companies had only a few hundred employees monitoring networks with billions of accounts.

The European Commission on Sunday signalled new laws might be implemented if the companies did not make a concerted effort to delete hate speech and radical content posted by terrorist groups more quickly.

In the US, the state and justice departments have encouraged technology companies to develop ways to spread counter-extremism content, which targets those at risk of radicalisation with more moderate messages. The Counter Extremism Project, a US non-profit organisation, recently launched technology to identify terrorist content, based on techniques used to locate and take down child pornography.

"For social media platforms, their business models depend on being open, so they are all afraid of that blunt instrument of legislation," said Zahed Amanullah, head of counter narratives at the Institute for Strategic Dialogue, a London think-tank. "They are all trying to improve algorithms that explicitly identify extremist content from a database, even though it's not an exact science."

Although Brussels has heaped pressure on big internet companies to be more assiduous, the commission has proved reluctant to fundamentally alter the legal framework that means they are not liable for content posted by users as long as they act swiftly to remove illicit material once they are aware of it.

But this legal protection does not exist if an internet company becomes an active curator by searching for such content.

Officials in Brussels are discussing a "good Samaritan" clause that would give tech groups more legal protection when they sought to root out content such as hate speech or material that infringed copyright. Industry groups have been lobbying for such a rule since last year.

Critics say this would tilt the law too far in favour of censorship, with internet platforms incentivised to take down content rather than erring on the side of free speech.

"The commission keeps complaining that the online companies are too powerful — and their solution is to have those companies, in the absence of responsibility, use their algorithms to make decisions about what we can say and do online," said Joe McNamee, executive director of EDRI, a group that lobbies for civil rights online.

Facebook and Twitter insist the new hash database will not act as a generalised censorship tool but instead assist human users in flagging the most egregious content.


MikeWB

Web giants sign up to EU hate speech rules

Google, Facebook, Twitter and Microsoft have signed up to new EU rules on taking down illegal hate speech as lawmakers and internet giants try to cope with violent racist abuse and technically savvy terrorists online.

The "code of conduct" will require companies to "review the majority" of flagged hate speech within 24 hours — and remove it if necessary — and even develop "counter narratives" to combat the growing problem.

Online hate speech has created a policy minefield for businesses and government. The speed at which the issue has developed has left a regulatory vacuum, which the industry has attempted to fill with its own solutions.


Although many of the code's policies were already in place, the rules mark the first attempt to codify how internet giants deal with hate speech across the EU.

Twitter, for instance, has so far taken down 125,000 terror-related accounts in less than a year, according to Europol. The sheer scale has presented businesses and policymakers with a conundrum, according to Rob Wainwright, the director of Europol. "We have never seen that use of technology before," he said.

The move comes after EU ministers demanded that the bloc work with IT companies to "counter terrorist propaganda" during an emergency meeting in the aftermath of the Brussels terror attacks.

This push to codify the handling of illegal hate speech online has been led in Brussels by Vera Jourova, the commissioner responsible for justice.

"The recent terror attacks have reminded us of the urgent need to address illegal online hate speech," said Ms Jourova. "Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and to spread violence and hatred."

The debate has been controversial. Two groups that had been involved in discussing the new code of conduct dropped out of the programme in protest over the agreed rules, saying they had been excluded from the final negotiations.

Joe McNamee, executive director at European Digital Rights, which defends human rights online, said the deal amounted to "persuading companies like Google and Facebook to sweep offences under the carpet".

Brussels has been trying to make companies such as Google — which runs YouTube, the world's largest video sharing website — take more responsibility for the content they host.

This move has focused mainly on legal issues such as copyright infringement and incitement to violence. Even so, it has triggered criticism from some lawmakers, who argue that giving internet companies the final say on what constitutes hate speech effectively privatises one arm of law enforcement.

Internet companies have been forced to tread carefully between demands for more proactive surveillance of their platforms from governments and the wishes of their users, who are increasingly wary of any state interference.

Karen White, Twitter's head of public policy for Europe, said: "We remain committed to letting the Tweets flow. However, there is a clear distinction between freedom of expression and conduct that incites violence and hate."
