The blog of DataDiggers

White House refuses to endorse the ‘Christchurch Call’ to block extremist content online

Posted by on May 15, 2019 in Australia, California, Canada, censorship, Facebook, France, freedom of speech, Google, hate crime, hate speech, New Zealand, Social Media, Software, TC, Terrorism, Twitter, United Kingdom, United States, White House, world wide web | 0 comments

The United States will not join other nations in endorsing the “Christchurch Call” — a global statement that commits governments and private companies to actions that would curb the distribution of violent and extremist content online.

“While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the Call. We will continue to engage governments, industry, and civil society to counter terrorist content on the Internet,” the statement from the White House reads.

The “Christchurch Call” is a non-binding statement drafted by foreign ministers from New Zealand and France meant to push internet platforms to take stronger measures against the distribution of violent and extremist content. The initiative originated as an attempt to respond to the March killings of 51 Muslim worshippers in Christchurch and the subsequent spread of the video recording of the massacre and statements from the killer online.

By signing the pledge, companies agree to improve their moderation processes and share more information about the work they’re doing to prevent terrorist content from going viral. Meanwhile, government signatories are agreeing to provide more guidance through legislation that would ban toxic content from social networks.

Already, Twitter, Microsoft, Facebook and Alphabet — the parent company of Google — have signed on to the pledge, along with the governments of France, Australia, Canada and the United Kingdom.

The “Christchurch Call” is consistent with other steps that government agencies are taking to address how to manage the ways in which technology is tearing at the social fabric. Members of the Group of 7 are also meeting today to discuss broader regulatory measures designed to combat toxic content, protect privacy and ensure better oversight of technology companies.

For its part, the White House seems more concerned about the potential risks to free speech that could stem from any actions taken to staunch the flow of extremist and violent content on technology platforms.

“We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press,” the statement reads. “Further, we maintain that the best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging.”

Signatories are already taking steps to make it harder for graphic violence or hate speech to proliferate on their platforms.

Last night, Facebook introduced a one-strike policy that would ban users who violate its live-streaming policies after one infraction.

The Christchurch killings are only the latest example of how white supremacist hate groups and terrorist organizations have used online propaganda to create an epidemic of violence at a global scale. Indeed, the alleged shooter in last month’s attack on a synagogue in Poway, Calif., referenced the writings of the Christchurch killer in an explanation for his attack, which he published online.

Critics are already taking shots at the White House for its inability to add the U.S. to a group of nations making a non-binding commitment to ensure that the global community can #BeBest online.


Source: The Tech Crunch


UK Far Right activist circumvents Facebook ban to livestream threats

Posted by on Mar 5, 2019 in Alex Jones, Europe, Facebook, far right, Google, hate speech, online platforms, Policy, Social, Social Media, social media platforms, social media tools, Stephen Yaxley-Lennon, Tommy Robinson, United Kingdom, YouTube | 0 comments

Stephen Yaxley-Lennon, a Far Right UK activist who was permanently banned from Facebook last week for repeatedly breaching its community standards on hate speech, was nonetheless able to use its platform to livestream harassment of an anti-fascist blogger whom he doorstepped at home last night.

UK-based blogger Mike Stuchbery detailed the intimidating incident in a series of tweets earlier today, writing that Yaxley-Lennon appeared to have used a friend’s Facebook account to circumvent the ban on his own Facebook and Instagram pages.

In recent years Yaxley-Lennon, who goes by the moniker ‘Tommy Robinson’ on social media, has used online platforms to raise his profile and solicit donations to fund Far Right activism.

He has also, in the case of Facebook and Twitter, fallen foul of mainstream tech platforms’ community standards, which prohibit use of their tools for hate speech and intimidation, earning himself a couple of bans. (At the time of writing, Yaxley-Lennon has not been banned from Google-owned YouTube.)

Circumventing Facebook’s ban appears to have been trivially easy for Yaxley-Lennon, who, as well as selling himself as a Far Right activist called “Tommy Robinson”, previously co-founded the Islamophobic Far Right pressure group the English Defence League.

Giving an account of being doorstepped by Yaxley-Lennon in today’s Independent, Stuchbery writes: “The first we knew of it was a loud, frantic rapping on my door at around quarter to 11 [in the evening]… That’s when notifications began to buzz on my phone — message requests on Facebook pouring in, full of abuse and vitriol. “Tommy” was obviously livestreaming his visit, using a friend’s Facebook account to circumvent his ban, and had tipped off his fans.”

A repost (to YouTube) of what appears to be a Facebook Live stream of the incident corroborates Stuchbery’s account, showing Yaxley-Lennon outside a house at night, where he can be seen shouting for “Mike” to come out and banging on doors and/or windows.

At another point in the same video Yaxley-Lennon can be seen walking away when he spots a passerby and engages them in conversation. During this portion of the video Yaxley-Lennon publicly reveals Stuchbery’s address — a harassment tactic that’s known as doxxing.

He can also be heard making insinuating remarks to the unidentified passerby about what he claims are Stuchbery’s “wrong” sexual interests.

In another tweet today Stuchbery describes the remarks as defamatory, adding that he now intends to sue Yaxley-Lennon.

Stuchbery has also posted several screengrabs to Twitter, showing a number of Facebook users who he is not connected to sending him abusive messages — presumably during the livestream.

During the video Yaxley-Lennon can also be heard making threats to return, saying: “Mike Stuchbery. See you soon mate, because I’m coming back and back and back and back.”

In a second livestream, also later reposted to YouTube, Yaxley-Lennon can be heard apparently having returned a second time to Stuchbery’s house, now at around 5am, to cause further disturbance.

Stuchbery writes that he called the police to report both visits. In another tweet he says they “eventually talked ‘Tommy’ into leaving, but not before he gave my full address, threatened to come back tomorrow, in addition to making a documentary ‘exposing me’”.

We reached out to Bedfordshire Police to ask what it could confirm about the incidents at Stuchbery’s house and the force’s press office told us it had received a number of enquiries about the matter. A spokeswoman added that it would be issuing a statement later today. We’ll update this post when we have it.  

Stuchbery also passed us details of the account he believes was used to livestream the harassment — suggesting it’s linked to another Far Right activist, known by the moniker ‘Danny Tommo’, who was also banned by Facebook last week.

The Facebook account in question was using a different moniker, ‘Jack Dawkins’, which suggests, if the account did indeed belong to the same banned Far Right activist, that he too was easily able to circumvent Facebook’s ban by creating a new account with a different (fake) name and email.

We passed the details of the ‘Jack Dawkins’ account to Facebook and since then the company appears to have suspended the account. (A message posted to it earlier today claimed it had been hacked.)

That Yaxley-Lennon was able to use Facebook to livestream harassment just days after being banned underlines how porous Facebook’s platform remains for organized purveyors of hate and harassment. Studies of Facebook’s platform have previously suggested as much.

This makes high-profile ‘Facebook bans’ of hate speech activists mostly a crisis PR exercise for the company — and easy PR for Far Right activists, who have been quick to seize on and trumpet social media bans as ‘evidence’ of mainstream censorship of their point of view, liberally ripping from the playbook of US hate speech peddlers such as the (also ‘banned’) InfoWars conspiracy theorist Alex Jones, for instance by posting pictures of themselves with their mouths gagged with tape.

Such images are intended to make meme-able messages for their followers to share. But the reality for social media savvy hate speech activists like Jones and Yaxley-Lennon looks nothing like censorship — given how demonstrably easy it remains for them to circumvent platform bans and carry on campaigns of hate and harassment via mainstream platforms.

We reached out to Facebook for a response to Yaxley-Lennon’s use of its livestreaming platform to harass Stuchbery, and to ask how it intends to prevent banned Far Right activists from circumventing bans and carrying on making use of its platform.

The company declined to make a public statement, though it did confirm the livestream had been flagged as violating its community standards last night and was removed afterwards. It also said it had deleted one post by a user for bullying. It added that it has content and safety teams which work around the clock to monitor Live videos flagged for review by Facebook users.

It did not confirm how long Yaxley-Lennon’s livestream was visible on its platform.

Stuchbery, a former history teacher, has garnered attention online writing about how Far Right groups have been using social media to organize and crowdfund ‘direct action’ in the offline world, including by targeting immigrants, Muslims, politicians and journalists in the street or on their own doorsteps.

But the trigger for Stuchbery being personally targeted by Yaxley-Lennon appears to be a legal letter served to the latter’s family home at the weekend informing him he’s being sued for defamation.

Stuchbery has been involved in raising awareness about the legal action, including promoting a crowdjustice campaign to raise funds for the suit.

The litigation relates to allegations Yaxley-Lennon made online late last year about a 15-year-old Syrian refugee schoolboy called Jamal who was shown in a video that went viral being violently bullied by white pupils at his school in Northern England.

Yaxley-Lennon responded to the viral video by posting a vlog to social media in which he makes a series of allegations about Jamal. The schoolboy’s family have described the allegations as defamatory. And the crowdjustice campaign promoted by Stuchbery has since raised more than £10,000 to sue Yaxley-Lennon.

The legal team pursuing the defamation litigation has also written that it intends to explore “routes by which the social media platforms that provide a means of dissemination to Lennon can also be attached to this action”.

The video of Yaxley-Lennon making claims about Jamal can still be found on YouTube. As indeed can Yaxley-Lennon’s own channel — despite equivalent pages having been removed from Facebook and Twitter (the latter pulled the plug on Yaxley-Lennon’s account a year ago).

We asked YouTube why it continues to provide a platform for Yaxley-Lennon to amplify hate speech and solicit donations for campaigns of targeted harassment but the company declined to comment publicly on the matter.

It did point out it demonetized Yaxley-Lennon’s channel last month, having determined it breaches its advertising policies.

YouTube also told us that it removes any video content that violates its hate speech policies — which do prohibit the incitement of violence or hatred against members of a religious community.

But by ignoring the wider context here — i.e. Yaxley-Lennon’s activity as a Far Right activist — and allowing him to continue broadcasting on its platform, YouTube is leaving the door open for dog whistle tactics to be used to signal to and stir up ‘in the know’ followers, as was the case with another Internet-savvy operator, InfoWars’ Alex Jones (until YouTube eventually terminated his channel last year).

Until last week Facebook was also ignoring the wider context around Yaxley-Lennon’s Far Right activism — a decision that likely helped him reach a wider audience than he would otherwise have been able to. So now Facebook has another full-blown hate speech ‘influencer’ going rogue on its platform and being cheered by an audience of followers its tools helped amass.

There is, surely, a lesson here.

Yet it’s also clear mainstream platforms are unwilling to proactively and voluntarily adapt their rules to close down malicious users who seek to weaponize social media tools to spread hate and sow division via amplified harassment.

But if platforms won’t do it, it’ll be left to governments to curb social media’s antisocial impacts with regulation.

And in the UK there is now no shortage of appetite to try; the government has a White Paper on social media and safety coming this winter, while the official opposition has said it wants to create a new regulator to rein in online platforms and even look at breaking up tech giants. So watch this space.

Public attitudes to (anti)social media have certainly soured — and with livestreams of hate and harassment it’s little wonder.

“Perhaps the worst thing, in the cold light of day, is the near certainty that the “content” “Tommy” produced during his stunt will now be used as a fundraising tool,” writes Stuchbery, concluding his account of being on the receiving end of a Facebook Live spewing hate and harassment. “If you dare to call him out on his cavalcade of hate, he usually tries to monetize you. It is a cruel twist.

“But most of all, I wonder how we got in this mess. I wonder how we got to a place where those who try to speak out against hatred and those who peddle it are threatened at their homes. I despair at how social media has become a weapon wielded by some, seemingly with impunity, to silence.”


Source: The Tech Crunch


It’s time for Facebook and Twitter to coordinate efforts on hate speech

Posted by on Sep 1, 2018 in Alex Jones, Cambridge Analytica, Facebook, Government, hate speech, infowars, Policy, Section 230, Social, Twitter, YouTube | 0 comments

Since the election of Donald Trump in 2016, there has been burgeoning awareness of hate speech on social media platforms like Facebook and Twitter. While activists have pressured these companies to improve their content moderation, few groups (outside of the German government) have outright sued the platforms for their actions.

That’s because of a legal distinction between media publications and media platforms that has made solving hate speech online a vexing problem.

Take, for instance, an op-ed published in the New York Times calling for the slaughter of an entire minority group.  The Times would likely be sued for publishing hate speech, and the plaintiffs may well be victorious in their case. Yet, if that op-ed were published in a Facebook post, a suit against Facebook would likely fail.

The reason for this disparity? Section 230 of the Communications Decency Act (CDA), which provides platforms like Facebook with a broad shield from liability when a lawsuit turns on what its users post or share. The latest uproar against Alex Jones and Infowars has led many to call for the repeal of section 230 – but that may lead to government getting into the business of regulating speech online. Instead, platforms should step up to the plate and coordinate their policies so that hate speech will be considered hate speech regardless of whether Jones uses Facebook, Twitter or YouTube to propagate his hate. 

A primer on section 230 

Section 230 is considered a bedrock of freedom of speech on the internet. Passed in the mid-1990s, it is credited with freeing platforms like Facebook, Twitter, and YouTube from the risk of being sued for content their users upload, and therefore powering the exponential growth of these companies. If it weren’t for section 230, today’s social media giants would have long been bogged down with suits based on what their users post, with the resulting necessary pre-vetting of posts likely crippling these companies altogether. 

Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it’s not only social media platforms that have benefited from section 230; sharing economy companies have used section 230 to defend themselves, with the likes of Airbnb arguing they’re not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps. When a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit finding an app user’s misrepresentation of his age not to be the app’s responsibility because of section 230.

Private regulation of hate speech 

Of course, section 230 has not meant that hate speech online has gone unchecked. Platforms like Facebook, YouTube and Twitter all have their own extensive policies prohibiting users from posting hate speech. Social media companies have hired thousands of moderators to enforce these policies and to hold violating users accountable by suspending them or blocking their access altogether. But the recent debacle with Alex Jones and Infowars presents a case study on how these policies can be inconsistently applied.  

Jones has for years fabricated conspiracy theories, like the one claiming that the Sandy Hook school shooting was a hoax and that Democrats run a global child-sex trafficking ring. With thousands of followers on Facebook, Twitter, and YouTube, Jones’ hate speech has had real life consequences. From the brutal harassment of Sandy Hook parents to a gunman storming a pizza restaurant in D.C. to save kids from the restaurant’s nonexistent basement, his messages have had serious deleterious consequences for many. 

Alex Jones and Infowars were finally suspended from ten platforms by our count – with even Twitter falling in line and suspending him for a week after first dithering. But the varying and delayed responses exposed how differently platforms handle the same speech.

Inconsistent application of hate speech rules across platforms, compounded by recent controversies involving the spread of fake news and the contribution of social media to increased polarization, have led to calls to amend or repeal section 230. If the printed press and cable news can be held liable for propagating hate speech, the argument goes, then why should the same not be true online – especially when fully two-thirds of Americans now report getting at least some of their news from social media.  Amid the chorus of those calling for more regulation of tech companies, section 230 has become a consistent target. 

Should hate speech be regulated? 

But if you need convincing as to why the government is not best placed to regulate speech online, look no further than Congress’s own wording in section 230. The section enacted in the mid-90s states that online platforms “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”  

Section 230 goes on to declare that it is the “policy of the United States . . . to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet.”  Based on the above, section 230 offers the now infamous liability protection for online platforms.  

From the simple fact that most of what we see on our social media is dictated by algorithms over which we have no control, to the Cambridge Analytica scandal, to increased polarization because of the propagation of fake news on social media, one can quickly see how Congress’s words in 1996 read today as a catalogue of inaccurate predictions. Even Ron Wyden, one of the original drafters of section 230, himself admits today that drafters never expected an “individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children” to be enabled through the protections offered by section 230.

It would be hard to argue that today’s Congress – having shown little understanding in recent hearings of how social media operates to begin with – is any more qualified at predicting the effects of regulating speech online twenty years from now.   

More importantly, the burden of complying with new regulations would create a significant barrier to entry for startups and therefore have the unintended consequence of entrenching incumbents. While Facebook, YouTube, and Twitter may have the resources and infrastructure to handle compliance with the increased moderation or pre-vetting of posts that regulations might impose, smaller startups would be at a major disadvantage in keeping up with such a burden.

Last chance before regulation 

The answer has to lie with the online platforms themselves. Over the past two decades, they have amassed a wealth of experience in detecting and taking down hate speech. They have built up formidable teams with varied backgrounds to draft policies that take into account an ever-changing internet. Their profits have enabled them to hire away top talent, from government prosecutors to academics and human rights lawyers.  

These platforms also have been on a hiring spree in the last couple of years to ensure that their product policy teams – the ones that draft policies and oversee their enforcement – are more representative of society at large. Facebook proudly announced that its product policy team now includes “a former rape crisis counselor, an academic who has spent her career studying hate organizations . . . and a teacher.” Gone are the days when a bunch of engineers exclusively decided where to draw the lines. Big tech companies have been taking the drafting and enforcement of their policies ever more seriously.

What they now need to do is take the next step and start to coordinate policies so that those who wish to propagate hate speech can no longer game policies across platforms. Waiting for controversies like Infowars to become a full-fledged PR nightmare before taking concrete action will only increase calls for regulation. Proactively pooling resources when it comes to hate speech policies and establishing industry-wide standards will provide a defensible reason to resist direct government regulation.

The social media giants can also build public trust by helping startups get up to speed on the latest approaches to content moderation. While any industry consortium around coordinating hate speech is certain to be dominated by the largest tech companies, they can ensure that policies are easy to access and widely distributed.

Coordination between fierce competitors may sound counterintuitive. But the common problem of hate speech and the gaming of online platforms by those trying to propagate it call for an industry-wide response. Precedent exists for tech titans coordinating when faced with a common threat. Just last year, Facebook, Microsoft, Twitter, and YouTube formalized their “Global Internet Forum to Counter Terrorism” – a partnership to curb the threat of terrorist content online. Fighting hate speech is no less laudable a goal.

Self-regulation is an immense privilege. To the extent that big tech companies want to hold onto that privilege, they have a responsibility to coordinate the policies that underpin their regulation of speech and to enable startups and smaller tech companies to get access to these policies and enforcement mechanisms.


Source: The Tech Crunch


MailChimp bans Alex Jones for hateful conduct

Posted by on Aug 7, 2018 in Alex Jones, fake news, hate speech, infowars, Mailchimp, Policy, ToS, United States | 0 comments

Another tech platform has closed the door on InfoWars’ Alex Jones. Email marketing platform MailChimp first confirmed the move in a statement to US media watchdog Media Matters, which said the accounts had been closed for “hateful conduct”. A MailChimp spokeswoman also confirmed it to TechCrunch via email.

In a statement MailChimp said it had terminated InfoWars’ and Jones’ accounts for ToS violations — adding that while it doesn’t usually comment on individual account closures it was making an exception in this case.

“We don’t allow people to use our platform to disseminate hateful content,” it wrote, adding: “We take our responsibility to our customers and employees seriously. The decision to terminate this account was thoughtfully considered and is in line with our company’s values.”

There has been something of a domino effect among tech companies in recent weeks over what to do about Jones/InfoWars, with Facebook, Apple and Google pulling content or shuttering Jones’ channels over ToS violations. Spotify, YouPorn and even Pinterest have also pulled his content for the same reasons, although Twitter has not — saying Jones has not violated its rules.

Jones, a notorious conspiracy theorist, has peddled anti-truths on his own website for nearly two decades, but has raised his profile and gained greater exposure by using the reach of mainstream tech platforms and tools — enabling him to rabble rouse beyond a niche audience.

As well as spreading toxic disinformation on mainstream social networks, including targeting the victims of the Sandy Hook Elementary school shooting by falsely claiming the massacre was an elaborate hoax, Media Matters notes that Jones has regularly encouraged violence — expounding an impending second U.S. civil war narrative in which he discusses killing minorities.

Jones is spinning the recent tech platform bans as a ‘censorship war’ on him, even as hosting companies continue to provide a platform on the Internet for his website — where he continues to peddle his BS for anyone who wants to listen.


Source: The Tech Crunch


Spotify becomes the latest tech platform to reject Alex Jones

Posted by on Aug 2, 2018 in Alex Jones, conspiracy, Drama, hate speech, infowars, Podcasts, Spotify | 0 comments

Yesterday, Spotify became the third tech platform in just over a week to take a stance on Alex Jones’s controversial far-right and conspiracy theorist content. The streaming service removed several Infowars podcast episodes due to their violation of the policy against hate content that Spotify released in May. This action follows strikes given against Jones by both YouTube and Facebook for videos containing content that violated those companies’ policies, including Islamophobic and transphobic hate speech and child endangerment.

In a statement to Bloomberg on Wednesday, a spokeswoman for Spotify said the following:

We take reports of hate content seriously and review any podcast episode or song that is flagged by our community. Spotify can confirm it has removed specific episodes of ‘The Alex Jones Show’ podcast for violating our hate content policy.

While Spotify did not reveal the specific episodes removed or the specific terms of the policy they violated, the possibilities for removal cited in its updated policy include “content whose principal purpose is to incite hatred or violence against people because of their race, religion, disability, gender identity, or sexual orientation.” The policy also states that these violations do not necessarily include “offensive, explicit, or vulgar content,” but specifically hate speech with the intention to cause harm.

Other episodes of the Infowars podcast are still available on Spotify, as well as on Apple Podcasts and Stitcher.

While this strike against Jones does come on the heels of YouTube and Facebook’s previous actions, Jones is not the first to have content removed via Spotify’s new policy. In May, the company pulled music from R. Kelly and rappers XXXTentacion and Tay-K, as well.

To monitor its service for content violating its hate-speech policy, Spotify is collaborating with rights advocacy groups such as The Southern Poverty Law Center, The Anti-Defamation League, Color Of Change, Showing Up for Racial Justice (SURJ), GLAAD, Muslim Advocates and the International Network Against Cyber Hate. Additionally, the company has built an internal monitoring tool called Spotify AudioWatch and is asking users to help flag hate content.

Decisions to police hate speech on tech platforms like Spotify, YouTube and Facebook have stirred up strong emotions on both sides of the debate. Balancing openness against the demands of private enterprise, these companies are still precariously charting a daily path toward creating the safest and simultaneously “most free” space for their users.


Source: The Tech Crunch


YouTube punishes Alex Jones’ channel for breaking policies against hate speech and child endangerment

Posted by on Jul 26, 2018 in Alex Jones, Google, hate speech, infowars, TC, YouTube | 0 comments

Google confirmed it has issued a strike against Infowars founder Alex Jones’ YouTube channel for breaking the video platform’s policies against child endangerment and hate speech. Four videos were also removed. The strike means Jones’ channel will not be allowed to live stream for 90 days.

In a statement emailed to reporters, a Google representative said “We have long standing policies against child endangerment and hate speech. We apply our policies consistently according to the content in the videos, regardless of the speaker or the channel. We also have a clear three strikes policy and we terminate channels when they receive three strikes in three months.”

According to The Verge, two of the deleted videos contained hate speech against Muslims, a third had transphobic content and the fourth showed a child being shoved to the ground by a grown man with the headline “how to prevent liberalism.”

The fact that four deleted videos only amounted to one strike against Jones’ channel has prompted scrutiny of YouTube’s moderation policy, with critics arguing that each video that breaks the platform’s rules should warrant its own strike, especially for prolific repeat offenders.

Jones’ channel was issued a strike in February for a video promoting the conspiracy theory that survivors of the Parkland, Florida shooting, which killed 17 people, were actually “crisis actors.” But strikes expire after three months, so the Alex Jones channel currently has only one active strike against it.

While he promotes ideas that are ridiculous and hateful, Jones is influential and Infowars has helped promulgate many pernicious conspiracy theories. For example, he is currently being sued by family members of Sandy Hook victims for claiming that the mass shooting, which killed 27 people, including 20 small children, was staged. Since the shooting in December 2012, victims’ families have been targeted for harassment by conspiracy theorists.

The YouTube strike comes a few days after Facebook refused to take down a video of Jones ranting against Robert Mueller, in which he accused the special counsel of committing sex crimes against children and mimed shooting him. Facebook told BuzzFeed News that Jones’ comments in the video, which was posted to his verified page, did not violate community standards because they are not a credible statement of intent to commit violence.

TechCrunch has also contacted Infowars for comment.


Source: The Tech Crunch
