
The blog of DataDiggers


Telegram gets 3M new signups during Facebook apps’ outage

Posted on Mar 14, 2019

Messaging platform Telegram claims to have had a surge in signups during a period of downtime for Facebook’s rival messaging services.

In a message sent to his Telegram channel, founder Pavel Durov wrote: “I see 3 million new users signed up for Telegram within the last 24 hours.”

It’s probably not a coincidence that Facebook and its related family of apps went down for most of Wednesday, as we reported earlier. At the time of writing Instagram’s service has been officially confirmed restored. Unofficially Facebook also appears to be back online, at least here in Europe.

Durov doesn’t offer an explicit explanation for Telegram’s sudden spike in signups, but he does take a thinly veiled swipe at social networking giant Facebook — whose founder recently claimed he now plans to pivot the ad platform to ‘privacy’.

“Good,” adds Durov on his channel, welcoming Telegram’s 3M newbies. “We have true privacy and unlimited space for everyone.”

A contact at Telegram confirmed to TechCrunch that the Facebook apps’ downtime is the likely cause of its latest signup spike, telling us: “These outages always drive new users.”

Though they also credited growth to “the mainstream overall increasing understanding about Facebook’s abusive attention harvesting practices”.

A year ago Telegram announced passing 200M monthly active users, though the platform has faced restrictions and/or blocks in some markets (principally Russia and Iran, as well as China) — apparently for refusing government requests for encryption keys and/or user information.

In Durov’s home country of Russia the government is also now moving to tighten Internet restrictions via new legislation — and thousands of people took to the streets in Moscow and other Russian cities this weekend to protest at growing Internet censorship, per Reuters.

Such restrictions could increase demand for Telegram’s encrypted messaging service in the country as the app does appear to still be partially accessible there.

Durov, who famously left Russia in 2014 — stepping away from his home country and an earlier social network he founded (VK.com) because of his stance on free speech — has sought to thwart the Russian government’s Telegram blocks via legal and technical measures.

The Telegram messaging platform has, of course, had its own issues with less political downtime too.

In a tweet last fall the company confirmed a server cluster had gone down, potentially affecting users in the Middle East, Africa and Europe. Although in that case the downtime only lasted a few hours.


Source: TechCrunch


The “splinternet” is already here

Posted on Mar 13, 2019

There is no question that the arrival of a fragmented and divided internet is now upon us. The “splinternet,” where cyberspace is controlled and regulated by different countries, is no longer just a concept but a dangerous reality. With the future of the “World Wide Web” at stake, governments and advocates in support of a free and open internet have an obligation to stem the tide of authoritarian regimes isolating the web to control information and their populations.

Both China and Russia have been rapidly increasing their internet oversight, leading to increased digital authoritarianism. Earlier this month Russia announced a plan to disconnect the entire country from the internet to simulate an all-out cyberwar. And, last month China issued two new censorship rules, identifying 100 new categories of banned content and implementing mandatory reviews of all content posted on short video platforms.

While China and Russia may be two of the biggest internet disruptors, they are by no means the only ones. Cuban, Iranian and even Turkish politicians have begun pushing “information sovereignty,” a euphemism for replacing services provided by western internet companies with their own more limited but easier to control products. And a 2017 study found that numerous countries, including Saudi Arabia, Syria and Yemen have engaged in “substantial politically motivated filtering.”

This digital control has also spread beyond authoritarian regimes. Increasingly, there are more attempts to keep foreign nationals off certain web properties.

For example, digital content available to U.K. citizens via the BBC’s iPlayer is becoming increasingly unavailable to Germans. South Korea filters, censors and blocks news agencies belonging to North Korea. Never have so many governments, authoritarian and democratic, actively blocked internet access to their own nationals.

The consequences of the splinternet and digital authoritarianism stretch far beyond the populations of these individual countries.

Back in 2016, U.S. trade officials accused China’s Great Firewall of creating what foreign internet executives defined as a trade barrier. Through controlling the rules of the internet, the Chinese government has nurtured a trio of domestic internet giants, known as BAT (Baidu, Alibaba and Tencent), who are all in lock step with the government’s ultra-strict regime.

The super-apps that these internet giants produce, such as WeChat, are built for censorship. The result? According to former Google CEO Eric Schmidt, “the Chinese Firewall will lead to two distinct internets. The U.S. will dominate the western internet and China will dominate the internet for all of Asia.”

Surprisingly, U.S. companies are helping to facilitate this splinternet.

Google spent years attempting to break into the Chinese market but had difficulty coexisting with the Chinese government’s strict censorship and collection of data, so much so that in March 2010, Google chose to pull its search engine and other services out of China. Now, in 2019, Google has completely changed its tune.

Google has made censorship allowances through an entirely different Chinese internet platform, a project called Dragonfly. Dragonfly is a censored version of Google’s Western search platform, with the key difference being that it blocks results for sensitive public queries.

[Photo: Sundar Pichai, chief executive officer of Google Inc., before a House Judiciary Committee hearing in Washington, D.C., on Dec. 11, 2018. Andrew Harrer/Bloomberg via Getty Images]

The Universal Declaration of Human Rights states that “people have the right to seek, receive, and impart information and ideas through any media and regardless of frontiers.”

Drafted in 1948, this declaration reflects the sentiment felt following World War II, when people worked to prevent authoritarian propaganda and censorship from ever taking hold the way it once did. And, while these words were written over 70 years ago, well before the age of the internet, this declaration challenges the very concept of the splinternet and the undemocratic digital boundaries we see developing today.

As the web becomes more splintered and information more controlled across the globe, we risk the deterioration of democratic systems, the corruption of free markets and further cyber misinformation campaigns. We must act now to save a free and open internet from censorship and international maneuvering, before history repeats itself.

[Photo: An Avaaz activist attends an anti-Facebook demonstration with cardboard cutouts of Facebook chief Mark Zuckerberg, reading “Fix Fakebook”, in front of the Berlaymont, the EU Commission headquarters, in Brussels, Belgium, on May 22, 2018. Thierry Monasse/Corbis via Getty Images]

The Ultimate Solution

Similar to the UDHR drafted in 1948, in 2016 the United Nations declared “online freedom” to be a fundamental human right that must be protected. While not legally binding, the motion passed by consensus, and the UN was therefore provided limited power to endorse an open internet (OI) system. By selectively applying pressure on governments that are not compliant, the UN can now enforce digital human rights standards.

The first step would be to implement a transparent monitoring system which ensures that the full resources of the internet, and the ability to operate on it, are easily accessible to all citizens. Countries such as North Korea, China, Iran and Syria, which block websites and filter email and social media communication, would be encouraged to improve through the imposition of incentives and consequences.

All countries would be ranked on their achievement of multiple positive factors including open standards, lack of censorship, and low barriers to internet entry. A three-tier open internet ranking system would divide all nations into Free, Partly Free or Not Free. The ultimate goal would be to have all countries gradually migrate towards the Free category, allowing all citizens full access to information across the WWW, equally free and open without constraints.

The second step would be for the UN to align itself much more closely with the largest western internet companies. Together they could jointly assemble detailed reports on each government’s efforts towards censorship creep and government overreach. The global tech companies are keenly aware of which specific countries are applying pressure for censorship and the restriction of digital speech. Together, the UN and global tech firms would prove strong adversaries, protecting the citizens of the world. Every individual in every country deserves to know what is truly happening in the world.

The Free countries, with an open internet and zero undue regulation or censorship, would have a clear path to tremendous economic prosperity. Countries that remain in the Not Free tier, attempting to impose their self-serving political and social values, would find themselves completely isolated, visibly violating digital human rights law.

This is not a hollow threat. A completely closed off splinternet will inevitably lead a country to isolation, low growth rates, and stagnation.


Source: TechCrunch


Whom to Elect for a Foreign Policy Crisis at 3 A.M.?

Posted on Mar 13, 2019

So far, no Democratic candidate is claiming to be ready.
Source: New York Times


Washington Memo: Mueller Report Has Washington Spinning (and It’s Not Even Filed)

Posted on Mar 12, 2019

Like a becalmed ship in the dead air before a coming storm, the nation’s capital is waiting, and waiting, for the special counsel’s report.
Source: New York Times


UK parliament calls for antitrust, data abuse probe of Facebook

Posted on Feb 18, 2019

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to users’ data to developers and advertisers in order to increase revenue and/or usage of its own platform; and examined what Facebook claimed was ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in order to build voter profiles to try to influence elections.

The committee’s conclusion about Facebook’s business is a damning one with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’ when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report. Update: Facebook said it rejects all claims it breached data protection and competition laws.

In a statement attributed to UK public policy manager, Karim Palant, the company told us:

We share the Committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.

We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. But we’re not waiting. We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for 7 years. No other channel for political advertising is as transparent and offers the tools that we do.

We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.

While we still have more to do, we are not the same company we were a year ago. We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.

Last fall Facebook was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga. It is appealing the ICO’s penalty, however, claiming there’s no evidence UK users’ data was misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

“Protecting our data helps us secure the past, but protecting inferences and uses of Artificial Intelligence (AI) is what we will need to protect our future,” the committee warns.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” says the committee. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, but one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18. The government said then that it has not ruled out doing so.

We’ve reached out to the DCMS for a response to the latest committee report. Update: A department spokesperson told us:

The Government’s forthcoming White Paper on Online Harms will set out a new framework for ensuring disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.

This week the Culture Secretary will travel to the United States to meet with tech giants including Google, Facebook, Twitter and Apple to discuss many of these issues.

We welcome this report’s contribution towards our work to tackle the increasing threat of disinformation and to make the UK the safest place to be online. We will respond in due course.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by an app developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations is addressed to social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit referendum vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins, MP and chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…

Source: Web and publications unit, House of Commons

“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.

“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.

Three senior managers knew

Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.

The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked for the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.

The committee dubs this an example of “a profound failure” of internal governance, also branding it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.

Here’s the committee’s account of that detail:

We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.

The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

This report was updated with comment from Facebook and the UK government


Source: The Tech Crunch

Read More

With cybersecurity threats looming, the government shutdown is putting America at risk

Posted by on Jan 24, 2019 in agriculture, america, China, Column, computer security, cybercrime, Cyberwarfare, Department of Homeland Security, Federal government, Finance, Food, Government, Internal Revenue Service, Iran, national security, North Korea, presidential election, russia, Security, United States | 0 comments

Putting political divisions and affiliations aside, the government partially shutting down for the third time over the last year is extremely worrisome, particularly when considering its impact on the nation’s cybersecurity priorities. Unlike the government, our nation’s enemies don’t “shut down.” When our nation’s cyber centers are not actively monitoring and protecting our most valuable assets and critical infrastructure, threats magnify and vulnerabilities become further exposed.

While Republicans and Democrats continue to butt heads over border security, the vital agencies tasked with properly safeguarding our nation from our adversaries are stuck in operational limbo. Without this protection in full force acting around the clock, serious external threats to government agencies and private businesses can thrive. This shutdown, now into its fourth week, has crippled key U.S. agencies, most notably the Department of Homeland Security, imperiling our nation’s cybersecurity defenses.

Consider the Cybersecurity and Infrastructure Security Agency, which has seen nearly 37 percent of its staff furloughed. This agency leads efforts to protect and defend critical infrastructure, as it pertains to industries as varied as energy, finance, food and agriculture, transportation and defense.

As defined in the 2001 Patriot Act, critical infrastructure is such that, “the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” In the interest of national security, we simply cannot tolerate prolonged vulnerability in these areas.

Employees who are considered “essential” are still on the job, but the loss of supporting staff could prove to be costly, in both the short and long term. More immediately, the shutdown places a greater burden on the employees deemed essential enough to stick around. These employees are tasked with both longer hours and expanded responsibilities, leading to a higher risk of critical oversight and mission failure, as weary agents find themselves increasingly stretched beyond their capabilities.

The long-term effects, however, are quite frankly, far more alarming. There’s a serious possibility our brightest minds in cybersecurity will consider moving to the private sector following a shutdown of this magnitude. Even ignoring that the private sector pays better, furloughed staff are likely to reconsider just how valued they are in their current roles. After the 2013 shutdown, a significant segment of the intelligence community left their posts for the relative stability of corporate America. The current shutdown bears those risks as well. A loss of critical personnel could result in institutional failure far beyond the present shutdown, leading to cascading security deterioration.

This shutdown also has farther-reaching effects on the federal government’s ability to attract talent, whether recent college grads or those interested in transitioning from the private sector. The stability of government work was once viewed as a guarantee compared to the private sector, but repeated shutdowns could incentivize prospective hires to take their talents to the private sector instead.

The IRS in particular is extremely vulnerable, putting America’s private sector and your average taxpayer directly in the cross-hairs. The shutdown has come at the worst time of the year, as the holidays and the post-holiday season tend to see the highest rates of cybercrime. In 2018, the IRS reported a 60 percent increase in email scams. Meanwhile, with the IRS having furloughed much of its staff as well, cyber criminals are likely to ramp up their activity even more.

Though the agency has stated it will recall a “significant portion” of its personnel to work without pay, it has also indicated there will be a lack of support for much beyond essential service. There’s no doubt cybercriminals will see this as a lucrative opportunity. With tax season on the horizon, the gap in oversight will play directly into cyber criminals’ hands, undoubtedly resulting in escalating financial losses due to tax identity theft and refund fraud.

Cyberwarfare is no longer some distant afterthought, practiced and discussed by a niche group of experts in a back room. Cyberwarfare has taken center stage on the virtual battlefield. Geopolitical adversaries such as North Korea, Russia, Iran and China rely on cyber as their most agile and dangerous weapon against the United States. These hostile nation-states salivate at the idea of a prolonged government shutdown.

From Russian interference in the 2016 presidential election to Chinese state cybercriminals breaching Marriott Hotels, the necessity to protect our national cybersecurity has never been more explicit.

If our government doesn’t resolve this dilemma quickly, America’s cybersecurity will undoubtedly suffer serious deterioration, inevitably endangering the lives and safety of citizens across the nation. This issue goes far beyond partisan politics, yet needs both parties to come to a consensus immediately. Time is not on our side.


Source: The Tech Crunch

Read More

Facebook finds and kills another 512 Kremlin-linked fake accounts

Posted by on Jan 17, 2019 in central europe, disinformation, eastern europe, estonia, Europe, Facebook, fake accounts, instagram, internet research agency, Kazakhstan, Lithuania, Moscow, presidential election, Propaganda, putin, Romania, rt, russia, Security, Social, Social Media, Sputnik, Ukraine, United States | 0 comments

Two years on from the U.S. presidential election, Facebook continues to have a major problem with Russian disinformation being megaphoned via its social tools.

In a blog post today the company reveals another tranche of Kremlin-linked fake activity — saying it’s removed a total of 471 Facebook pages and accounts, as well as 41 Instagram accounts, which were being used to spread propaganda in regions where Putin’s regime has sharp geopolitical interests.

In its latest reveal of “coordinated inauthentic behavior” — aka the euphemism Facebook uses for disinformation campaigns that rely on its tools to generate a veneer of authenticity and plausibility in order to pump out masses of sharable political propaganda — the company says it identified two operations, both originating in Russia, and both using similar tactics without any apparent direct links between the two networks.

One operation was targeting Ukraine specifically, while the other was active in a number of countries in the Baltics, Central Asia, the Caucasus, and Central and Eastern Europe.

“We’re taking down these Pages and accounts based on their behavior, not the content they post,” writes Facebook’s Nathaniel Gleicher, head of cybersecurity policy. “In these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”

Sputnik link

Discussing the Russian disinformation op targeting multiple countries, Gleicher says Facebook found what looked like innocuous or general interest pages to be linked to employees of Kremlin propaganda outlet Sputnik, with some of the pages encouraging protest movements and pushing other Putin lines.

“The Page administrators and account owners primarily represented themselves as independent news Pages or general interest Pages on topics like weather, travel, sports, economics, or politicians in Romania, Latvia, Estonia, Lithuania, Armenia, Azerbaijan, Georgia, Tajikistan, Uzbekistan, Kazakhstan, Moldova, Russia, and Kyrgyzstan,” he writes. “Despite their misrepresentations of their identities, we found that these Pages and accounts were linked to employees of Sputnik, a news agency based in Moscow, and that some of the Pages frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption.”

Facebook has included some sample posts from the removed accounts in the blog which show a mixture of imagery being deployed — from a photo of a rock concert, to shots of historic buildings and a snowy scene, to obviously militaristic and political protest imagery.

In all Facebook says it removed 289 Pages and 75 Facebook accounts associated with this Russian disop; adding that around 790,000 accounts followed one or more of the removed Pages.

It also reveals that it received around $135,000 for ads run by the Russian operators (specifying this was paid for in euros, rubles, and U.S. dollars).

“The first ad ran in October 2013, and the most recent ad ran in January 2019,” it notes, adding: “We have not completed a review of the organic content coming from these accounts.”

These Kremlin-linked Pages also hosted around 190 events — with the first scheduled for August 2015, according to Facebook, and the most recent scheduled for January 2019. “Up to 1,200 people expressed interest in at least one of these events. We cannot confirm whether any of these events actually occurred,” it further notes.

Facebook adds that open source reporting and work by partners which investigate disinformation helped identify the network.

It also says it has shared information about the investigation with U.S. law enforcement, the U.S. Congress, other technology companies, and policymakers in impacted countries.

Ukraine tip-off

In the case of the Ukraine-targeted Russian disop, Facebook says it removed a total of 107 Facebook Pages, Groups, and accounts, and 41 Instagram accounts, specifying that it was acting on an initial tip off from U.S. law enforcement.

In all, it says around 180,000 Facebook accounts were following one or more of the removed Pages, while the fake Instagram accounts were being followed by more than 55,000 accounts.

Again Facebook received money from the disinformation purveyors, saying it took in around $25,000 in ad spending on Facebook and Instagram in this case — all paid for in rubles this time — with the first ad running in January 2018, and the most recent in December 2018. (Again it says it has not completed a review of content the accounts were generating.)

“The individuals behind these accounts primarily represented themselves as Ukrainian, and they operated a variety of fake accounts while sharing local Ukrainian news stories on a variety of topics, such as weather, protests, NATO, and health conditions at schools,” writes Gleicher. “We identified some technical overlap with Russia-based activity we saw prior to the US midterm elections, including behavior that shared characteristics with previous Internet Research Agency (IRA) activity.”

In the Ukraine case it says it found no Events being hosted by the pages.

“Our security efforts are ongoing to help us stay a step ahead and uncover this kind of abuse, particularly in light of important political moments and elections in Europe this year,” adds Gleicher. “We are committed to making improvements and building stronger partnerships around the world to more effectively detect and stop this activity.”

A month ago Facebook also revealed it had removed another batch of politically motivated fake accounts. In that case the network behind the pages had been working to spread misinformation in Bangladesh 10 days before the country’s general elections.

This week it also emerged the company is extending some of its nascent election security measures by bringing in requirements for political advertisers to more international markets ahead of major elections in the coming months, such as checks that a political advertiser is located in the country.

However, in other countries that also have big votes looming this year, Facebook has yet to announce any measures to combat politically charged fakes.


Source: The Tech Crunch

Read More

Ukraine, After Naval Clash With Russia, Considers Martial Law

Posted by on Nov 26, 2018 in Crimea (Ukraine), Federal Security Service, Kerch Strait, Law of the Sea (UN Convention), Martial Law, russia, Ukraine | 0 comments

Ukraine’s navy says Russia opened fire on its ships in a disputed area in the Black Sea, escalating the conflict between the two nations.
Source: New York Times

Read More

Khashoggi’s fate shows the flip side of the surveillance state

Posted by on Oct 20, 2018 in Edward Snowden, Government, Jamal Khashoggi, law enforcement, mass surveillance, Mohammed Bin Salman, national security, Privacy, russia, Saudi Arabia, Security, Softbank, Storage, surveillance, TC, trump, Turkey, Venture Capital, Vision Fund, Visual Computing | 0 comments

It’s been over five years since NSA whistleblower Edward Snowden lifted the lid on government mass surveillance programs, revealing, in unprecedented detail, quite how deep the rabbit hole goes thanks to the spread of commercial software and connectivity enabling a bottomless intelligence-gathering philosophy of ‘bag it all’.

Yet technology’s onward march has hardly broken its stride.

Government spying practices are perhaps more scrutinized, as a result of awkward questions about out-of-date legal oversight regimes. Though whether the resulting legislative updates, putting an official stamp of approval on bulk and/or warrantless collection as a state spying tool, have put Snowden’s ethical concerns to bed seems doubtful — albeit, it depends on who you ask.

The UK’s post-Snowden Investigatory Powers Act continues to face legal challenges. And the government has been forced by the courts to unpick some of the powers it helped itself to vis-à-vis people’s data. But bulk collection, as an official modus operandi, has been both avowed and embraced by the state.

In the US, too, lawmakers elected to push aside controversy over a legal loophole that provides intelligence agencies with a means for the warrantless surveillance of American citizens — re-stamping Section 702 of FISA for another six years. So of course they haven’t cared a fig for non-US citizens’ privacy either.

Increasingly powerful state surveillance is seemingly here to stay, with or without adequately robust oversight. And commercial use of strong encryption remains under attack from governments.

But there’s another end to the surveillance telescope. As I wrote five years ago, those who watch us can expect to be — and indeed are being — increasingly closely watched themselves as the lens gets turned on them:

“Just as our digital interactions and online behaviour can be tracked, parsed and analysed for problematic patterns, pertinent keywords and suspicious connections, so too can the behaviour of governments. Technology is a double-edged sword – which means it’s also capable of lifting the lid on the machinery of power-holding institutions like never before.”

We’re now seeing some of the impacts of this surveillance technology cutting both ways.

With attention to detail, good connections (in all senses) and the application of digital forensics all sorts of discrete data dots can be linked — enabling official narratives to be interrogated and unpicked with technology-fuelled speed.

Witness, for example, how quickly the Kremlin’s official line on the Skripal poisonings unravelled.

After the UK released CCTV of the two Russian suspects in the Novichok attack in Salisbury last month, the speedy counter-claim from Russia — presented most obviously via an ‘interview’ with the two ‘citizens’ conducted by state mouthpiece broadcaster RT — was that the men were just tourists with a special interest in the cultural heritage of the small English town.

Nothing to see here, claimed the Russian state, even though the two unlikely tourists didn’t appear to have done much actual sightseeing on their flying visit to the UK during the tail end of a British winter (unless you count vicarious viewing of Salisbury’s Wikipedia page).

But digital forensics outfit Bellingcat, partnering with investigative journalists at The Insider Russia, quickly found plenty to dig up online, and with the help of data-providing tips. (We can only speculate who those whistleblowers might be.)

Their investigation made use of a leaked database of Russian passport documents; passport scans provided by sources; publicly available online videos and selfies of the suspects; and even visual computing expertise to academically cross-match photos taken 15 years apart — to, within a few weeks, credibly unmask the ‘tourists’ as two decorated GRU agents: Anatoliy Chepiga and Dr Alexander Yevgeniyevich Mishkin.

When an official narrative that already lacks credibility is set against an external investigation able to show its workings and sources (where possible) — and thereby demonstrate how reasonably constructed and plausible the counter-narrative is — there’s little doubt where the real authority is shown to lie.

And who the real liars are.

That the Kremlin lies is hardly news, of course. But when its lies are so painstakingly and publicly unpicked, and its veneer of untruth ripped away, there is undoubtedly reputational damage to the authority of Vladimir Putin.

The sheer depth and availability of data in the digital era supports faster-than-ever, evidence-based debunking of official fictions. That threatens to erode rogue regimes built on lies, pulling away the curtain that invests their leaders with power in the first place — power that depends on the scope and range of their capacity and competency seeming unknowable, and on other players on the world stage accepting such a ‘leader’ at face value.

The truth about power is often far more stupid and sordid than the fiction. So a powerful abuser, with their workings revealed, can be reduced to their baser parts — and shown for the thuggish and brutal operator they really are, as well as proved a liar.

On the stupidity front, in another recent and impressive bit of cross-referencing, Bellingcat was able to turn passport data pertaining to another four GRU agents — whose identities had been made public by Dutch and UK intelligence agencies (after they had been caught trying to hack into the network of the Organisation for the Prohibition of Chemical Weapons) — into a long list of 305 suggestively linked individuals also affiliated with the same GRU military unit, and whose personal data had been sitting in a publicly available automobile registration database… Oops.

There’s no doubt certain governments have wised up to the power of public data and are actively releasing key info into the public domain where it can be pored over by journalists and interested citizen investigators — be that CCTV imagery of suspects or actual passport scans of known agents.

A cynic might call this selective leaking. But while the choice of what to release may well be self-serving, the veracity of the data itself is far harder to dispute. Exactly because it can be cross-referenced with so many other publicly available sources and so made to speak for itself.

Right now, we’re in the midst of another fast-unfolding example of surveillance apparatus and public data standing in the way of dubious state claims — in the case of the disappearance of Washington Post journalist Jamal Khashoggi, who went into the Saudi consulate in Istanbul on October 2 for a pre-arranged appointment to collect papers for his wedding and never came out.

Saudi authorities first tried to claim Khashoggi left the consulate the same day, though did not provide any evidence to back up their claim. And CCTV clearly showed him going in.

Yesterday they finally admitted he was dead — but are now trying to claim he died quarrelling in a fistfight, attempting to spin another after-the-fact narrative to cover up and blame-shift the targeted slaying of a journalist who had written critically about the Saudi regime.

Since Khashoggi went missing, CCTV and publicly available data has also been pulled and compared to identify a group of Saudi men who flew into Istanbul just prior to his appointment at the consulate; were caught on camera outside it; and left Turkey immediately after he had vanished.

Including naming a leading Saudi forensics doctor, Dr Salah Muhammed al-Tubaigy, as being among the party that Turkish government sources also told journalists had been carrying a bone saw in their luggage.

Men in the group have also been linked to Saudi crown prince Mohammed bin Salman, via cross-referencing travel records and social media data.

“In a 2017 video published by the Saudi-owned Al Ekhbariya on YouTube, a man wearing a uniform name tag bearing the same name can be seen standing next to the crown prince. A user with the same name on the Saudi app Menom3ay is listed as a member of the royal guard,” writes the Guardian, joining the dots on another suspected henchman.

A marked element of the Khashoggi case has been the explicit descriptions of his fate leaked to journalists by Turkish government sources, who have said they have recordings of his interrogation, torture and killing inside the building — presumably via bugs either installed in the consulate itself or via intercepts placed on devices held by the individuals inside.

This surveillance material has reportedly been shared with US officials, where it must be shaping the geopolitical response — making it harder for President Trump to do what he really wants to do, and stick like glue to a regional US ally with which he has his own personal financial ties, because the arms of that state have been recorded in the literal act of cutting off the fingers and head of a critical journalist, and then sawing up and disposing of the rest of his body.

Attempts by the Saudis to construct a plausible narrative to explain what happened to Khashoggi when he stepped over its consulate threshold to pick up papers for his forthcoming wedding have failed in the face of all the contrary data.

Meanwhile, the search for a body goes on.

And attempts by the Saudis to shift blame for the heinous act away from the crown prince himself are also being discredited by the weight of data…

And while it remains to be seen what sanctions, if any, the Saudis will face from Trump’s conflicted administration, the crown prince is already being hit where it hurts by the global business community withdrawing in horror from the prospect of being tainted by bloody association.

The idea that a company as reputation-sensitive as Apple would be just fine investing billions more alongside the Saudi regime, in SoftBank’s massive Vision Fund vehicle, seems unlikely, to say the least.

Thanks to technology’s surveillance creep the world has been given a close-up view of how horrifyingly brutal the Saudi regime can be — and through the lens of an individual it can empathize with and understand.

Safe to say, supporting second acts for regimes that cut off fingers and sever heads isn’t something any CEO would want to become famous for.

The power of technology to erode privacy is clearer than ever. Down to the very teeth of the bone saw. But what’s also increasingly clear is that powerful and at times terrible capability can be turned around to debase power itself — when authorities themselves become abusers.

So the flip-side of the surveillance state can be seen in the public airing of the bloody colors of abusive regimes.

Turns out, microscopic details can make all the difference to geopolitics.

RIP Jamal Khashoggi


Source: The Tech Crunch

Read More

Wife of Former N.R.A. President Tapped Accused Russian Agent in Pursuit of Jet Fuel Payday

Posted by on Sep 2, 2018 in American University, Butina, Mariia, Conservatism (US Politics), Foreign Students (in US), Gordon, J D, Gun Control, Keene, David A, Lobbying and Lobbyists, National Rifle Assn, Norquist, Grover G, Oil (Petroleum) and Gasoline, Putin, Vladimir V, Republican Party, russia, Sanford, Mark, Trump, Donald J Jr, United States, Virginia, Washington (DC) | 0 comments

Maria Butina surrounded herself with prominent American conservatives and dubious characters bent on making a fast buck. It was not always easy to tell one from the other.
Source: New York Times

Read More