
The blog of DataDiggers

White House refuses to endorse the ‘Christchurch Call’ to block extremist content online

Posted by on May 15, 2019 in Australia, California, Canada, censorship, Facebook, France, freedom of speech, Google, hate crime, hate speech, New Zealand, Social Media, Software, TC, Terrorism, Twitter, United Kingdom, United States, White House, world wide web | 0 comments

The United States will not join other nations in endorsing the “Christchurch Call” — a global statement that commits governments and private companies to actions that would curb the distribution of violent and extremist content online.

“While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the Call. We will continue to engage governments, industry, and civil society to counter terrorist content on the Internet,” the statement from the White House reads.

The “Christchurch Call” is a non-binding statement drafted by foreign ministers from New Zealand and France meant to push internet platforms to take stronger measures against the distribution of violent and extremist content. The initiative originated as an attempt to respond to the March killings of 51 Muslim worshippers in Christchurch and the subsequent spread of the video recording of the massacre and statements from the killer online.

By signing the pledge, companies agree to improve their moderation processes and share more information about the work they’re doing to prevent terrorist content from going viral. Meanwhile, government signatories are agreeing to provide more guidance through legislation that would ban toxic content from social networks.

Already, Twitter, Microsoft, Facebook and Alphabet — the parent company of Google — have signed on to the pledge, along with the governments of France, Australia, Canada and the United Kingdom.

The “Christchurch Call” is consistent with other steps that government agencies are taking to address how to manage the ways in which technology is tearing at the social fabric. Members of the Group of 7 are also meeting today to discuss broader regulatory measures designed to combat toxic content, protect privacy and ensure better oversight of technology companies.

For its part, the White House seems more concerned about the potential risks to free speech that could stem from any actions taken to staunch the flow of extremist and violent content on technology platforms.

“We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press,” the statement reads. “Further, we maintain that the best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging.”

Signatories are already taking steps to make it harder for graphic violence or hate speech to proliferate on their platforms.

Last night, Facebook introduced a one-strike policy that would ban users who violate its live-streaming policies after one infraction.

The Christchurch killings are only the latest example of how white supremacist hate groups and terrorist organizations have used online propaganda to create an epidemic of violence at a global scale. Indeed, the alleged shooter in last month’s attack on a synagogue in Poway, Calif., referenced the writings of the Christchurch killer in an explanation for his attack, which he published online.

Critics are already taking shots at the White House for its inability to add the U.S. to a group of nations making a non-binding commitment to ensure that the global community can #BeBest online.


Source: The Tech Crunch

Read More

Strasbourg Shooting Was Terrorism, France Says, as Police Search for Gunman

Posted by on Dec 12, 2018 in Crime and Criminals, Deaths (Fatalities), France, Murders, Attempted Murders and Homicides, Strasbourg (France), Terrorism | 0 comments

The suspect had been flagged by intelligence services for possible religious radicalization, and had served time in prison.
Source: New York Times

Read More

Europe to push for one-hour takedown law for terrorist content

Posted by on Sep 12, 2018 in Artificial Intelligence, EC, Europe, European Union, Freedom of Expression, Government, law enforcement, mass surveillance, online extremism, online freedom, Security, Social, Social Media, Terrorism, terrorist propaganda | 0 comments

The European Union’s executive body is doubling down on its push for platforms to pre-filter the Internet, publishing a proposal today that would require all websites to monitor uploads in order to quickly remove terrorist content.

The Commission handed platforms an informal one-hour rule for removing terrorist content back in March. It’s now proposing to turn that into a law to prevent such content from spreading violent propaganda over the Internet.

For now the ‘rule of thumb’ regime continues to apply. But it’s putting meat on the bones of its thinking, fleshing out a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

As per usual EU processes, the Commission’s proposal would need to gain the backing of Member States and the EU parliament before it could be cemented into law.

One major point to note here is that existing EU law does not allow Member States to impose a general obligation on hosting service providers to monitor the information that users transmit or store. But in the proposal the Commission argues that, given the “grave risks associated with the dissemination of terrorist content”, states could be allowed to “exceptionally derogate from this principle under an EU framework”.

So it’s essentially suggesting that Europeans’ fundamental rights might not, in fact, be so fundamental. (Albeit, European judges might well take a different view — and it’s very likely the proposals could face legal challenges should they be cast into law.)

What is being suggested would also apply to any hosting service provider that offers services in the EU — “regardless of their place of establishment or their size”. So, seemingly, not just large platforms, like Facebook or YouTube, but — for example — anyone hosting a blog that includes a free-to-post comment section.

Websites that fail to promptly take down terrorist content would face fines — with the level of penalties being determined by EU Member States (Germany has already legislated to enforce social media hate speech takedowns within 24 hours, setting the maximum fine at €50M).

“Penalties are necessary to ensure the effective implementation by hosting service providers of the obligations pursuant to this Regulation,” the Commission writes, envisaging the most severe penalties being reserved for systematic failures to remove terrorist material within one hour. 

It adds: “When determining whether or not financial penalties should be imposed, due account should be taken of the financial resources of the provider.” So — for example — individuals with websites who fail to moderate their comment section fast enough might not be served the very largest fines, presumably.

The proposal also encourages platforms to develop “automated detection tools” so they can take what it terms “proactive measures proportionate to the level of risk and to remove terrorist material from their services”.

So the Commission’s continued push for Internet pre-filtering is clear. (This is also a feature of its copyright reform — which is being voted on by MEPs later today.)

Albeit, it’s not alone on that front. Earlier this year the UK government went so far as to pay an AI company to develop a terrorist propaganda detection tool that used machine learning algorithms trained to automatically detect propaganda produced by the Islamic State terror group — with a claimed “extremely high degree of accuracy”. (At the time it said it had not ruled out forcing tech giants to use it.)

What is terrorist content for the purposes of this proposal? The Commission refers to an earlier EU directive on combating terrorism — which defines the material as “information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups”.

And on that front you do have to wonder whether, for example, some of U.S. president Donald Trump’s comments last year after the far-right rally in Charlottesville, where a counter-protester was murdered by a white supremacist — comments in which he suggested there were “fine people” among those same murderous and violent white supremacists — might not fall under that ‘glorifying the commission of terrorist offences’ umbrella, should, say, someone repost them to a comment section that was viewable in the EU…

Safe to say, even terrorist propaganda can be subjective. And the proposed regime will inevitably encourage borderline content to be taken down — having a knock-on impact upon online freedom of expression.

The Commission also wants websites and platforms to share information with law enforcement and other relevant authorities and with each other — suggesting the use of “standardised templates”, “response forms” and “authenticated submission channels” to facilitate “cooperation and the exchange of information”.

It tackles the problem of what it refers to as “erroneous removal” — i.e. content that’s removed after being reported or erroneously identified as terrorist propaganda but which is subsequently, under requested review, determined not to be — by placing an obligation on providers to have “remedies and complaint mechanisms to ensure that users can challenge the removal of their content”.

So platforms and websites will be obligated to police and judge speech — which they already do, of course, but the proposal doubles down on turning online content hosts into judges and arbiters of that same content.

The regulation also includes transparency obligations on the steps being taken against terrorist content by hosting service providers — which the Commission claims will ensure “accountability towards users, citizens and public authorities”. 

Other perspectives are of course available… 

The Commission envisages all taken down content being retained by the host for a period of six months so that it could be reinstated if required, i.e. after a valid complaint — to ensure what it couches as “the effectiveness of complaint and review procedures in view of protecting freedom of expression and information”.

It also sees the retention of takedowns helping law enforcement — meaning platforms and websites will continue to be co-opted into state law enforcement and intelligence regimes, getting further saddled with the burden and cost of having to safely store and protect all this sensitive data.

(On that the EC just says: “Hosting service providers need to put in place technical and organisational safeguards to ensure the data is not used for other purposes.”)

The Commission would also create a system for monitoring the monitoring it’s proposing platforms and websites undertake — thereby further extending the proposed bureaucracy. It says it would establish a “detailed programme for monitoring the outputs, results and impacts” within one year of the regulation being applied; report on the implementation and the transparency elements within two years; and evaluate the entire functioning of it four years after it comes into force.

The executive body says it consulted widely ahead of forming the proposals — including running an open public consultation, carrying out a survey of 33,500 EU residents, and talking to Member States’ authorities and hosting service providers.

“By and large, most stakeholders expressed that terrorist content online is a serious societal problem affecting internet users and business models of hosting service providers,” the Commission writes. “More generally, 65% of respondents to the Eurobarometer survey considered that the internet is not safe for its users and 90% of the respondents consider it important to limit the spread of illegal content online.

“Consultations with Member States revealed that while voluntary arrangements are producing results, many see the need for binding obligations on terrorist content, a sentiment echoed in the European Council Conclusions of June 2018. While overall, the hosting service providers were in favour of the continuation of voluntary measures, they noted the potential negative effects of emerging legal fragmentation in the Union.

“Many stakeholders also noted the need to ensure that any regulatory measures for removal of content, particularly proactive measures and strict timeframes, should be balanced with safeguards for fundamental rights, notably freedom of speech. Stakeholders noted a number of necessary measures relating to transparency, accountability as well as the need for human review in deploying automated tools.”


Source: The Tech Crunch

Read More

Op-Ed Columnist: John McCain, a Maverick We Can Learn From

Posted by on Aug 26, 2018 in Human Rights and Human Rights Violations, McCain, John, Presidential Election of 2000, Prisoners of War, Terrorism, Torture, United States Politics and Government, Waterboarding | 0 comments

For all our disagreements, his death leaves a great emptiness in Washington.
Source: New York Times

Read More

Toronto Attack Revives Debate Over ISIS’ Call to Arms to the Mentally Ill

Posted by on Jul 26, 2018 in Hussain, Faisal, Islamic State in Iraq and Syria (ISIS), Mental Health and Disorders, Social Media, Terrorism, Toronto (Ontario) | 0 comments

Religious fanaticism or mental disorder? After the Islamic State claimed the gunman who shot 15 in Toronto as one of its own, the debate was rekindled.
Source: New York Times

Read More

Imran Khan, Former Cricket Star, Pulls Into Lead in Pakistan’s Vote Count

Posted by on Jul 25, 2018 in Defense and Military Forces, Khan, Imran, pakistan, Politics and Government, Terrorism | 0 comments

Early results did not point to an outright victory for Mr. Khan, who is widely seen as benefiting from the help of Pakistan’s powerful military.
Source: New York Times

Read More

Violent Extremist or Political Candidate? In Pakistan Election, You Can Be Both

Posted by on Jul 17, 2018 in Ahle Sunnat Wal Jamaat (Pakistan), elections, Farooqi, Aurangzeb, Lashkar-e-Jhangvi, pakistan, Politics and Government, Sharif, Nawaz, Terrorism | 0 comments

Pakistani courts have cleared a number of candidates to run in national elections this month, despite their ties to extremism and their inclusion on terrorism watch lists.
Source: New York Times

Read More

Death Toll in Pakistan Suicide Bombing Rises to 128

Posted by on Jul 14, 2018 in Baluchistan (Pakistan), Deaths (Fatalities), elections, Islamic State in Iraq and Syria (ISIS), pakistan, Terrorism | 0 comments

The country is preparing for elections, but political turmoil and a spate of terrorist attacks on candidates threaten to undermine the credibility of the vote.
Source: New York Times

Read More

ISIS May Be Waning, but Global Threats of Terrorism Continue to Spread

Posted by on Jul 6, 2018 in africa, Al Qaeda, Espionage and Intelligence Services, Europe, Iraq, Islamic State in Iraq and Syria (ISIS), Mattis, James N, Shabab, Social Media, Syria, Targeted Killings, Terrorism, Trump, Donald J, United States Defense and Military Forces, United States Special Operations Command | 0 comments

The U.S. and its allies are still waging a shadow war against a wide range of terrorist threats despite the Islamic State’s defeat in Iraq and Syria.
Source: New York Times

Read More

Deadly Blast Punctures Afghanistan’s Brief Moment of Peace

Posted by on Jun 16, 2018 in Afghanistan, Deaths (Fatalities), Ghani, Ashraf, Nangarhar Province (Afghanistan), Politics and Government, Terrorism | 0 comments

On a day of cease-fire, an explosion left 26 dead in a mixed crowd of Taliban, security forces, and civilians celebrating a remarkable lull in violence.
Source: New York Times

Read More