Facebook sues analytics firm Rankwave over data misuse

Posted on May 11, 2019

Facebook might have another Cambridge Analytica on its hands. In a late Friday news dump, Facebook revealed that it filed a lawsuit today alleging that South Korean analytics firm Rankwave abused its developer platform’s data, and has refused to cooperate with a mandatory compliance audit and a request to delete the data.

Facebook’s lawsuit centers on Rankwave offering to help businesses build a Facebook authorization step into their apps so they can pass all the user data to Rankwave, which then analyzes biographic and behavioral traits to supply user contact info and ad targeting assistance to the business. Rankwave also apparently misused data sucked in by its own consumer app for checking your social media “influencer score”. That app could pull data about your Facebook activity such as location check-ins, determine that you’ve checked into a baseball stadium, and then Rankwave could help its clients target you with ads for baseball tickets.

The use of a seemingly fun app to slurp up user data and repurpose it for other business goals is strikingly similar to how Cambridge Analytica’s personality quiz app tempted millions of users to provide data about themselves and their friends.

Rankwave touts its Facebook data usage in this 2014 pitch deck

TechCrunch has obtained a copy of the lawsuit, which alleges that Rankwave misused Facebook data outside of the apps where it was collected, purposefully delayed responding to a cease-and-desist order, claimed it didn’t violate Facebook policy, lied about not having used its apps since 2018 when they were in fact accessed in April 2019, and then refused to comply with a mandatory audit of its data practices. Facebook Platform data is not supposed to be repurposed for other business goals, only used by the developer to improve their app’s user experience.

“By filing the lawsuit, we are sending a message to developers that Facebook is serious about enforcing our policies, including requiring developers to cooperate with us during an investigation,” Facebook’s director of platform enforcement and litigation Jessica Romero wrote. Facebook tells TechCrunch that “To date Rankwave has not participated in our investigation and we are trying to get more info from them to determine if there was any misuse of Pages data.” We’ve reached out to Rankwave for its response.

Cambridge Analytic-ish

Facebook’s lawsuit details that “Rankwave used the Facebook data associated with Rankwave’s apps to create and sell advertising and marketing analytics and models — which violated Facebook’s policies and terms” and that it “failed to comply with Facebook’s requests for proof of Rankwave’s compliance with Facebook policies, including an audit.” Rankwave apparently accessed data from over thirty apps, including those created by its clients.

Specifically, Facebook cites that its “Platform Policies largely restrict Developers from using Facebook data outside of the environment of the app, for any purpose other than enhancing the app users’ experience on the app.” But Rankwave allegedly used Facebook data outside those apps.

Rankwave describes how it extracts contact info and ad targeting data from Facebook data

Facebook’s suit claims that “Rankwave’s B2B apps were installed and used by businesses to track and analyze activity on their Facebook Pages . . . Rankwave operated a consumer app called the ‘Rankwave App.’ This consumer app was designed to measure the app user’s popularity on Facebook by analyzing the level of interaction that other users had with the app user’s Facebook posts. On its website, Rankwave claimed that this app calculated a user’s ‘Social influence score’ by ‘evaluating your social activities’ and receiving ‘responses from your friends.’”

TechCrunch has found that Rankwave still offers an Android app that asks you to log in with Facebook so it can assess the popularity of your posts and give you a “Social Influencer Score”. Until 2015, when Facebook tightened its policies, this kind of app could ingest not only a user’s own data but also data about their Facebook friends. As with Cambridge Analytica, this likely massively compounded Rankwave’s total data access.

Rankwave’s Android app asks for users’ Facebook data in exchange for providing them a Social Influencer Score
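For illustration, here is a minimal sketch of how an app of this kind could compute an engagement score from Graph API data. The field names follow Facebook’s public Graph API, but the API version, function name and scoring formula are assumptions made for the example, not Rankwave’s actual code:

// Minimal sketch (TypeScript). GRAPH version and scoring formula are hypothetical.
const GRAPH = "https://graph.facebook.com/v2.3"; // hypothetical pre-2015-era API version

async function influencerScore(accessToken: string): Promise<number> {
  // Fetch the user's recent posts along with like and comment counts.
  // A pre-2015 app could also have requested friends' data; Facebook
  // shut off that access when it tightened its policies.
  const params = new URLSearchParams({
    fields: "likes.summary(true),comments.summary(true)",
    limit: "25",
    access_token: accessToken,
  });
  const resp = await fetch(`${GRAPH}/me/posts?${params}`);
  if (!resp.ok) throw new Error(`Graph API error: ${resp.status}`);
  const { data = [] } = await resp.json();
  if (data.length === 0) return 0;
  // Invented scoring formula: average reactions per recent post.
  const total = data.reduce(
    (sum: number, post: any) =>
      sum +
      (post.likes?.summary?.total_count ?? 0) +
      (post.comments?.summary?.total_count ?? 0),
    0
  );
  return total / data.length;
}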

Facebook Delays Coming After Rankwave

Founded in 2012 by Sungwha Shim, Rankwave came into Facebook’s crosshairs in June 2018, after it had been sold to a Korean entertainment company in May 2017. Facebook pegs the value of its data at the time of the buyout at $9.8 million.

Worryingly, Facebook didn’t reach out to Rankwave until January 2019 for information proving it complied with the social network’s policies. After receiving no response, Facebook issued a cease-and-desist order in February. Rankwave replied seeking more time because its CTO had resigned, a claim Facebook characterizes as “false representations”. Later that month, Rankwave denied violating Facebook’s policies but refused to provide proof. Facebook gave it more time to do so, but Rankwave didn’t respond. Facebook has now shut down Rankwave’s apps.

Rankwave claims to be able to extract a wide array of ad targeting data from Facebook data

Now Facebook is seeking money to cover the $9.8 million value of the data, additional monetary damages and legal fees, plus injunctive relief restraining Rankwave from accessing the Facebook Platform, requiring it to comply with Facebook’s audit, and requiring it to delete all Facebook data.

The fact that Rankwave was openly promoting services that blatantly violate Facebook’s policies casts further doubt on how the social network was policing its platform. And the six-month delay between Facebook identifying a potential issue with Rankwave and even reaching out for information, plus another several months before it blocked Rankwave’s apps, shows a failure to move swiftly to enforce its policies. These blunders might explain why Facebook buried the news by announcing it on a Friday afternoon, when many reporters and readers have already signed off for the weekend.

For now there’s no evidence of wholesale transfer of Rankwave’s data to other parties, or of its misuse for especially nefarious purposes like influencing an election, as with Cambridge Analytica. The lawsuit merely alleges data was wrongly harnessed to make money, which may not spur the same level of backlash. But the case is further proof that Facebook was too busy reaping the platform’s growth benefits to properly safeguard it against abuse.

You can learn more about Rankwave’s analytics practices from this 2014 presentation.


Source: TechCrunch


UK parliament calls for antitrust, data abuse probe of Facebook

Posted on Feb 18, 2019

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to users’ data to developers and advertisers in order to increase revenue and/or usage of its own platform; and what Facebook claimed as ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in order to build voter profiles to try to influence elections.

The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’ when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report. Update: Facebook said it rejects all claims it breached data protection and competition laws.

In a statement attributed to UK public policy manager, Karim Palant, the company told us:

We share the Committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.

We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. But we’re not waiting. We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for 7 years. No other channel for political advertising is as transparent and offers the tools that we do.

We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.

While we still have more to do, we are not the same company we were a year ago. We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.

Last fall Facebook was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga. It is appealing the ICO’s penalty, though, claiming there’s no evidence UK users’ data was misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

“Protecting our data helps us secure the past, but protecting inferences and uses of Artificial Intelligence (AI) is what we will need to protect our future,” the committee warns.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” says the committee. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, but one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18. The government said then that it has not ruled out doing so.

We’ve reached out to the DCMS for a response to the latest committee report. Update: A department spokesperson told us:

The Government’s forthcoming White Paper on Online Harms will set out a new framework for ensuring disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.

This week the Culture Secretary will travel to the United States to meet with tech giants including Google, Facebook, Twitter and Apple to discuss many of these issues.

We welcome this report’s contribution towards our work to tackle the increasing threat of disinformation and to make the UK the safest place to be online. We will respond in due course.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by an app developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations is addressed to social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit referendum vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins MP, chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…

Source: Web and publications unit, House of Commons

“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.

“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.

Three senior managers knew

Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.

The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.

The committee dubs this an example of “a profound failure” of internal governance, and brands it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.

Here’s the committee’s account of that detail:

We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.

The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

This report was updated with comment from Facebook and the UK government.


Source: TechCrunch


Tech giants offer empty apologies because users can’t quit

Posted on Nov 25, 2018

A true apology consists of a sincere acknowledgement of wrongdoing, a show of empathic remorse for the wrong and the harm it caused, and a promise of restitution by improving one’s actions to make things right. Without the follow-through, saying sorry isn’t an apology; it’s a hollow ploy for forgiveness.

That’s the kind of “sorry” we’re getting from tech giants — an attempt to quell bad PR and placate the afflicted, often without the systemic change necessary to prevent repeated problems. Sometimes it’s delivered in a blog post. Sometimes it’s in an executive apology tour of media interviews. But rarely is it in the form of change to the underlying structures of a business that caused the issue.

Intractable Revenue

Unfortunately, tech company business models often conflict with the way we wish they would act. We want more privacy but they thrive on targeting and personalization data. We want control of our attention but they subsist on stealing as much of it as possible with distraction while showing us ads. We want safe, ethically built devices that don’t spy on us but they make their margins by manufacturing them wherever’s cheap with questionable standards of labor and oversight. We want groundbreaking technologies to be responsibly applied, but juicy government contracts and the allure of China’s enormous population compromise their morals. And we want to stick to what we need and what’s best for us, but they monetize our craving for the latest status symbol or content through planned obsolescence and locking us into their platforms.

The result is that even if their leaders earnestly wanted to impart meaningful change to provide restitution for their wrongs, their hands are tied by entrenched business models and the short-term focus of the quarterly earnings cycle. They apologize and go right back to problematic behavior. The Washington Post recently chronicled a dozen times Facebook CEO Mark Zuckerberg has apologized, yet the social network keeps experiencing fiasco after fiasco. Tech giants won’t improve enough on their own.

Addiction To Utility

The threat of us abandoning ship should theoretically hold the captains in line. But tech giants have evolved into fundamental utilities that many have a hard time imagining living without. How would you connect with friends? Find what you needed? Get work done? Spend your time? What hardware or software would you cuddle up with in the moments you feel lonely? We live our lives through tech, have become addicted to its utility, and fear the withdrawal.

If there were principled alternatives to switch to, perhaps we could hold the giants accountable. But the scalability, network effects, and aggregation of supply by distributors has led to near monopolies in these core utilities. The second-place solution is often distant. What’s the next best social network that serves as an identity and login platform that isn’t owned by Facebook? The next best premium mobile and PC maker behind Apple? The next best mobile operating system for the developing world beyond Google’s Android? The next best ecommerce hub that’s not Amazon? The next best search engine? Photo feed? Web hosting service? Global chat app? Spreadsheet?

Facebook is still growing in the US & Canada despite the backlash, proving that tech users aren’t voting with their feet. And if not for a calculation methodology change, it would have added 1 million users in Europe this quarter too.

One of the few tech backlashes that led to real flight was #DeleteUber. Workplace discrimination, shady business protocols, exploitative pricing and more combined to spur the movement to ditch the ridehailing app. But what was different here is that US Uber users did have a principled alternative to switch to without much hassle: Lyft. The result was that “Lyft benefitted tremendously from Uber’s troubles in 2018,” eMarketer’s forecasting director Shelleen Shum told USA Today in May. Uber missed eMarketer’s projections while Lyft exceeded them, narrowing the gap between the car services. And meanwhile, Uber’s CEO stepped down as it tried to overhaul its internal policies.

This is why we need regulation that promotes competition by preventing massive mergers and giving users the right to interoperable data portability so they can easily switch away from companies that treat them poorly.

But in the absence of viable alternatives to the giants, leaving these mainstays is inconvenient. After all, they’re the ones that made us practically allergic to friction. Even after massive scandals, data breaches, toxic cultures, and unfair practices, we largely stick with them to avoid the uncertainty of life without them. Even Facebook added 1 million monthly users in the US and Canada last quarter despite seemingly every possible source of unrest. Tech users are not voting with their feet. We’ve proven we can harbor ill will towards the giants while begrudgingly buying and using their products. Our leverage to improve their behavior is vastly weakened by our loyalty.

Inadequate Oversight

Regulators have failed to adequately step up either. This year’s congressional hearings about Facebook and social media often devolved into inane and uninformed questioning, like how Facebook earns money if it doesn’t charge. “Senator, we run ads,” Facebook CEO Mark Zuckerberg said with a smirk. Other times, politicians were so intent on scoring partisan points by grandstanding or advancing conspiracy theories about bias that they were unable to make any real progress. A recent survey commissioned by Axios found that “In the past year, there has been a 15-point spike in the number of people who fear the federal government won’t do enough to regulate big tech companies — with 55% now sharing this concern.”

When regulators do step in, their attempts can backfire. GDPR was supposed to help tamp down on the dominance of Google and Facebook by limiting how they could collect user data and making them more transparent. But the high cost of compliance simply hindered smaller players or drove them out of the market, while the giants had ample cash to spend on jumping through government hoops. Google actually gained ad tech market share and Facebook saw the smallest loss, while smaller ad tech firms lost 20 or 30 percent of their business.

Europe’s GDPR privacy regulations backfired, reinforcing Google and Facebook’s dominance. Chart via Ghostery, Cliqz, and WhoTracksMe.

Even the Honest Ads Act, which was designed to bring political campaign transparency to internet platforms following election interference in 2016, has yet to be passed, despite support from Facebook and Twitter. There hasn’t been meaningful discussion of blocking social networks from acquiring their competitors in the future, let alone of actually breaking Instagram and WhatsApp off of Facebook. Governments like the UK’s, which just forcibly seized documents related to Facebook’s machinations surrounding the Cambridge Analytica debacle, provide some indication of willpower. But clumsy regulation could deepen the moats of the incumbents and prevent disruptors from gaining a foothold. We can’t depend on regulators to sufficiently protect us from tech giants right now.

Our Hope On The Inside

The best bet for change will come from the rank and file of these monolithic companies. With the war for talent raging, rock star employees able to have huge impact on products, and compensation costs to keep them around rising, tech giants are vulnerable to the opinions of their own staff. It’s simply too expensive and disjointing to have to recruit new high-skilled workers to replace those that flee.

Google declined to renew a contract with the government after 4,000 employees petitioned and a few resigned over Project Maven’s artificial intelligence being used to target lethal drone strikes. Change can even flow across company lines. Many tech giants, including Facebook and Airbnb, have removed their forced arbitration rules for harassment disputes after Google did the same in response to 20,000 of its employees walking out in protest.

Thousands of Google employees protested the company’s handling of sexual harassment and misconduct allegations on Nov. 1.

Facebook is desperately pushing an internal communications campaign to reassure staffers it’s improving in the wake of damning press reports from the New York Times and others. TechCrunch published an internal memo from Facebook’s outgoing VP of communications Elliot Schrage in which he took the blame for recent issues and encouraged employees to avoid finger-pointing, while COO Sheryl Sandberg tried to reassure employees that “I know this has been a distraction at a time when you’re all working hard to close out the year — and I am sorry.” These internal apologies could come with much more contrition and real change than those paraded for the public.

And so, after years of relying on these tech workers to build the products we use every day, we must now rely on them to save us from those products’ harms. It’s a weighty responsibility: to move their talents where the impact is positive, or to commit to standing up against the business imperatives of their employers. We as the public and media must in turn celebrate when they do what’s right for society, even when it reduces value for shareholders. If apps abuse us or unduly rob us of our attention, we need to stay off of them.

And we must accept that shaping the future for the collective good may be inconvenient for the individual. There’s an opportunity here not just to complain or wish, but to build a social movement that holds tech giants accountable for delivering the change they’ve promised over and over.




Source: TechCrunch


UK parliament seizes cache of internal Facebook documents to further privacy probe

Posted on Nov 25, 2018

Facebook founder Mark Zuckerberg may yet regret underestimating a UK parliamentary committee that’s been investigating the democracy-denting impact of online disinformation for the best part of this year — and whose repeat requests for facetime he’s just as repeatedly snubbed.

In its latest escalation, reported in yesterday’s Observer, the committee has used parliamentary powers to seize a cache of documents pertaining to a US lawsuit, in order to further its attempt to hold Facebook to account for misuse of user data.

Facebook’s oversight — or rather lack of it — where user data is concerned has been a major focus for the committee, as its enquiry into disinformation and data misuse has unfolded and scaled over the course of this year, ballooning in scope and visibility since the Cambridge Analytica story blew up into a global scandal this April.

The internal documents now in the committee’s possession are alleged to contain significant revelations about decisions made by Facebook senior management vis-a-vis data and privacy controls — including confidential emails between senior executives and correspondence with Zuckerberg himself.

This has been a key line of enquiry for parliamentarians. And an equally frustrating one — with committee members accusing Facebook of being deliberately misleading and concealing key details from it.

The seized files pertain to a US lawsuit that predates mainstream publicity around political misuse of Facebook data, with the suit filed in 2015 by a US startup called Six4Three, after Facebook removed developer access to friend data. (As we’ve previously reported, Facebook was actually being warned about data risks related to its app permissions as far back as 2011 — yet it didn’t fully shut down the friends data API until May 2015.)

The core complaint is an allegation that Facebook enticed developers to create apps for its platform by implying they would get long-term access to user data in return. By later cutting off that access, the claim goes, Facebook was effectively defrauding developers.

Since lodging the complaint, the plaintiffs have seized on the Cambridge Analytica saga to try to bolster their case.

And in a legal motion filed in May Six4Three’s lawyers claimed evidence they had uncovered demonstrated that “the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones”.

The startup used legal powers to obtain the cache of documents — which remain under seal on order of a California court. But the UK parliament used its own powers to swoop in and seize the files from the founder of Six4Three during a business trip to London when he came under the jurisdiction of UK law, compelling him to hand them over.

According to the Observer, parliament sent a serjeant at arms to the founder’s hotel — giving him a final warning and a two-hour deadline to comply with its order.

“When the software firm founder failed to do so, it’s understood he was escorted to parliament. He was told he risked fines and even imprisonment if he didn’t hand over the documents,” it adds, apparently revealing how Facebook lost control over some more data (albeit, its own this time).

In comments to the newspaper yesterday, DCMS committee chair Damian Collins said: “We are in uncharted territory. This is an unprecedented move but it’s an unprecedented situation. We’ve failed to get answers from Facebook and we believe the documents contain information of very high public interest.”

Collins later tweeted the Observer’s report on the seizure, teasing “more next week” — likely a reference to the grand committee hearing in parliament already scheduled for November 27.

But it could also be a hint the committee intends to reveal and/or make use of information locked up in the documents, as it puts questions to Facebook’s VP of policy solutions…

That said, the documents are subject to the Californian superior court’s seal order, so — as the Observer points out — cannot be shared or made public without risk of being found in contempt of court.

A spokesperson for Facebook made the same point, telling the newspaper: “The materials obtained by the DCMS committee are subject to a protective order of the San Mateo Superior Court restricting their disclosure. We have asked the DCMS committee to refrain from reviewing them and to return them to counsel or to Facebook. We have no further comment.”

Facebook’s spokesperson added that Six4Three’s “claims have no merit”, further asserting: “We will continue to defend ourselves vigorously.”

Earlier on Sunday, Facebook sent a response to Collins, which Guardian reporter Carole Cadwalladr posted soon after.

With the response, Facebook seems to be using the same tactics which were responsible for the latest round of criticism against the company — deny, delay, and dissemble. 

And, well, the irony of Facebook asking for its data to remain private also shouldn’t be lost on anyone at this point…

Another irony: In July, the Guardian reported that as part of Facebook’s defence against Six4Three’s suit the company had argued in court that it is a publisher — seeking to have what it couched as ‘editorial decisions’ about data access protected by the US’ first amendment.

Which is — to put it mildly — quite the contradiction, given Facebook’s long-standing public characterization of its business as just a distribution platform, never a media company.

So expect plenty of fireworks at next week’s public hearing as parliamentarians once again question Facebook over its various contradictory claims.

It’s also possible the committee will have been sent an internal email distribution list by then, detailing who at Facebook knew about the Cambridge Analytica breach in the earliest instance.

This list was obtained by the UK’s data watchdog, over the course of its own investigation into the data misuse saga. And earlier this month information commissioner Elizabeth Denham confirmed the ICO has the list and said it would pass it to the committee.

The accountability net does look to be closing in on Facebook management.

Even as Facebook continues to deny international parliaments any face-time with its founder and CEO (the EU parliament remains the sole exception).

Last week the company refused to even have Zuckerberg do a video call to take the committee’s questions — offering its VP of policy solutions, Richard Allan, to go before what’s now a grand committee comprised of representatives from seven international parliaments instead.

The grand committee hearing will take place in London on Tuesday morning, British time — followed by a press conference in which parliamentarians representing Facebook users from across the world will sign a set of ‘International Principles for the Law Governing the Internet’, making “a declaration on future action”.

So it’s also ‘watch this space’ where international social media regulation is concerned.

As noted above, Allan is just the latest stand-in for Zuckerberg. Back in April the DCMS committee spent the best part of five hours trying to extract answers from Facebook CTO Mike Schroepfer.

“You are doing your best but the buck doesn’t stop with you does it? Where does the buck stop?” one committee member asked him then.

“It stops with Mark,” replied Schroepfer.

But Zuckerberg definitely won’t be stopping by on Tuesday.


Source: TechCrunch


Facebook policy VP, Richard Allan, to face the international ‘fake news’ grilling that Zuckerberg won’t

Posted on Nov 23, 2018

An unprecedented international grand committee comprised of 22 representatives from seven parliaments will meet in London next week to put questions to Facebook about the online fake news crisis and the social network’s own string of data misuse scandals.

But Facebook founder Mark Zuckerberg won’t be providing any answers. The company has repeatedly refused requests for him to answer parliamentarians’ questions.

Instead it’s sending a veteran EMEA policy guy, Richard Allan, now its London-based VP of policy solutions, to face a roomful of irate MPs.

Allan will give evidence next week to elected members from the parliaments of Argentina, Brazil, Canada, Ireland, Latvia and Singapore, along with members of the UK’s Digital, Culture, Media and Sport (DCMS) parliamentary committee.

At the last count the international initiative had a full eight parliaments behind it, but it’s down to seven — with Australia unable to attend on account of the travel involved in getting to London.

A spokeswoman for the DCMS committee confirmed Facebook declined its last request for Zuckerberg to give evidence, telling TechCrunch: “The Committee offered the opportunity for him to give evidence over video link, which was also refused. Facebook has offered Richard Allan, vice president of policy solutions, which the Committee has accepted.”

“The Committee still believes that Mark Zuckerberg is the appropriate person to answer important questions about data privacy, safety, security and sharing,” she added. “The recent New York Times investigation raises further questions about how recent data breaches were allegedly dealt with within Facebook, and when the senior leadership team became aware of the breaches and the spread of Russian disinformation.”

The DCMS committee has spearheaded the international effort to hold Facebook to account for its role in a string of major data scandals, joining forces with similarly concerned committees across the world, as part of an already wide-ranging enquiry into the democratic impacts of online disinformation that’s been keeping it busy for the best part of this year.

And especially busy since the Cambridge Analytica story blew up into a major global scandal this April, although Facebook’s 2018 run of bad news hasn’t stopped there…

The evidence session with Allan is scheduled to take place at 11.30am (GMT) on November 27 in Westminster. (It will also be streamed live on the UK’s parliament.tv website.)

Afterwards a press conference has been scheduled — during which DCMS says a representative from each of the seven parliaments will sign a set of ‘International Principles for the Law Governing the Internet’.

It bills this as “a declaration on future action from the parliaments involved” — suggesting the intent is to generate international momentum and consensus for regulating social media.

The DCMS’ preliminary report on the fake news crisis, which it put out this summer, called for urgent action from government on a number of fronts — including floating the idea of a levy on social media to defend democracy.

However UK ministers failed to leap into action, merely putting out a tepid ‘wait and see’ response. Marshalling international action appears to be DCMS’ alternative action plan.

At next week’s press conference, grand committee members will take questions following Allan’s evidence — so expect swift condemnation of any fresh equivocation, misdirection or question-dodging from Facebook (which has already been accused by DCMS members of a pattern of evasive behavior).

Last week’s NYT report also characterized the company’s strategy since 2016, vis-a-vis the fake news crisis, as ‘delay, deny, deflect’.

The grand committee will hear from other witnesses too, including the UK’s information commissioner Elizabeth Denham who was before the DCMS committee recently to report on a wide-ranging ecosystem investigation it instigated in the wake of the Cambridge Analytica scandal.

She told it then that Facebook needs to take “much greater responsibility” for how its platform is being used, warning that unless the company overhauls its privacy-hostile business model it risks burning user trust for good.

Also giving evidence next week: Deputy information commissioner Steve Wood; the former Prime Minister of St Kitts and Nevis, Rt Hon Dr Denzil L Douglas (on account of Cambridge Analytica/SCL Elections having done work in the region); and the co-founder of PersonalData.IO, Paul-Olivier Dehaye.

Dehaye has also given evidence to the committee before — detailing his experience of making Subject Access Requests to Facebook — and trying and failing to obtain all the data it holds on him.


Source: TechCrunch


It’s time for Facebook and Twitter to coordinate efforts on hate speech

Posted on Sep 1, 2018

Since the election of Donald Trump in 2016, there has been burgeoning awareness of the hate speech on social media platforms like Facebook and Twitter. While activists have pressured these companies to improve their content moderation, few groups (outside of the German government) have outright sued the platforms for their actions.

That’s because of a legal distinction between media publications and media platforms that has made solving hate speech online a vexing problem.

Take, for instance, an op-ed published in the New York Times calling for the slaughter of an entire minority group.  The Times would likely be sued for publishing hate speech, and the plaintiffs may well be victorious in their case. Yet, if that op-ed were published in a Facebook post, a suit against Facebook would likely fail.

The reason for this disparity? Section 230 of the Communications Decency Act (CDA), which provides platforms like Facebook with a broad shield from liability when a lawsuit turns on what its users post or share. The latest uproar against Alex Jones and Infowars has led many to call for the repeal of section 230 – but that may lead to government getting into the business of regulating speech online. Instead, platforms should step up to the plate and coordinate their policies so that hate speech will be considered hate speech regardless of whether Jones uses Facebook, Twitter or YouTube to propagate his hate. 

A primer on section 230 

Section 230 is considered a bedrock of freedom of speech on the internet. Passed in the mid-1990s, it is credited with freeing platforms like Facebook, Twitter, and YouTube from the risk of being sued for content their users upload, and therefore powering the exponential growth of these companies. If it weren’t for section 230, today’s social media giants would have long been bogged down with suits based on what their users post, with the resulting necessary pre-vetting of posts likely crippling these companies altogether. 

Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it’s not only social media platforms that have benefited from section 230; sharing economy companies have used section 230 to defend themselves, with the likes of Airbnb arguing they’re not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps. When a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit finding an app user’s misrepresentation of his age not to be the app’s responsibility because of section 230.

Private regulation of hate speech 

Of course, section 230 has not meant that hate speech online has gone unchecked. Platforms like Facebook, YouTube and Twitter all have their own extensive policies prohibiting users from posting hate speech. Social media companies have hired thousands of moderators to enforce these policies and to hold violating users accountable by suspending them or blocking their access altogether. But the recent debacle with Alex Jones and Infowars presents a case study on how these policies can be inconsistently applied.  

Jones has for years fabricated conspiracy theories, like the one claiming that the Sandy Hook school shooting was a hoax and that Democrats run a global child-sex trafficking ring. With thousands of followers on Facebook, Twitter, and YouTube, Jones’ hate speech has had real life consequences. From the brutal harassment of Sandy Hook parents to a gunman storming a pizza restaurant in D.C. to save kids from the restaurant’s nonexistent basement, his messages have had serious deleterious consequences for many. 

Alex Jones and Infowars were finally suspended from ten platforms by our count, with even Twitter falling in line and suspending him for a week after initially dithering. But the varying and delayed responses exposed how differently the platforms handle the same speech.

Inconsistent application of hate speech rules across platforms, compounded by recent controversies involving the spread of fake news and the contribution of social media to increased polarization, has led to calls to amend or repeal section 230. If the printed press and cable news can be held liable for propagating hate speech, the argument goes, then why should the same not be true online, especially when fully two-thirds of Americans now report getting at least some of their news from social media? Amid the chorus of those calling for more regulation of tech companies, section 230 has become a consistent target.

Should hate speech be regulated? 

But if you need convincing as to why the government is not best placed to regulate speech online, look no further than Congress’s own wording in section 230. The section enacted in the mid-90s states that online platforms “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”  

Section 230 goes on to declare that it is the “policy of the United States . . . to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet.”  Based on the above, section 230 offers the now infamous liability protection for online platforms.  

From the simple fact that most of what we see on social media is dictated by algorithms over which we have no control, to the Cambridge Analytica scandal, to the increased polarization driven by the propagation of fake news, one can quickly see how Congress's words from 1996 read today as a catalogue of inaccurate predictions. Even Ron Wyden, one of the original drafters of section 230, admits today that its drafters never expected an “individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children” to be enabled through the protections offered by section 230.

It would be hard to argue that today's Congress – which has shown little understanding in recent hearings of how social media even operates – is any better qualified to predict the effects of regulating speech online twenty years from now.

More importantly, the burden of complying with new regulations would create a significant barrier to entry for startups, with the unintended consequence of entrenching incumbents. While Facebook, YouTube, and Twitter may have the resources and infrastructure to handle the increased moderation or pre-vetting of posts that regulations might impose, smaller startups would be at a major disadvantage in keeping up with such a burden.

Last chance before regulation 

The answer has to lie with the online platforms themselves. Over the past two decades, they have amassed a wealth of experience in detecting and taking down hate speech. They have built up formidable teams with varied backgrounds to draft policies that take into account an ever-changing internet. Their profits have enabled them to hire away top talent, from government prosecutors to academics and human rights lawyers.  

These platforms also have been on a hiring spree in the last couple of years to ensure that their product policy teams – the ones that draft policies and oversee their enforcement – are more representative of society at large. Facebook proudly announced that its product policy team now includes “a former rape crisis counselor, an academic who has spent her career studying hate organizations . . . and a teacher.” Gone are the days when a bunch of engineers exclusively decided where to draw the lines. Big tech companies have been taking the drafting and enforcement of their policies ever more seriously.

What they now need to do is take the next step and start to coordinate policies so that those who wish to propagate hate speech can no longer game policies across platforms. Waiting for controversies like Infowars to become a full-fledged PR nightmare before taking concrete action will only increase calls for regulation. Proactively pooling resources when it comes to hate speech policies and establishing industry-wide standards will provide a defensible reason to resist direct government regulation.

The social media giants can also build public trust by helping startups get up to speed on the latest approaches to content moderation. While any industry consortium around hate speech policy is certain to be dominated by the largest tech companies, they can ensure that those policies are easy to access and widely distributed.

Coordination between fierce competitors may sound counterintuitive. But the common problem of hate speech and the gaming of online platforms by those trying to propagate it call for an industry-wide response. Precedent exists for tech titans coordinating when faced with a common threat. Just last year, Facebook, Microsoft, Twitter, and YouTube formalized their “Global Internet Forum to Counter Terrorism” – a partnership to curb the threat of terrorist content online. Fighting hate speech is no less laudable a goal.

Self-regulation is an immense privilege. To the extent that big tech companies want to hold onto that privilege, they have a responsibility to coordinate the policies that underpin their regulation of speech and to enable startups and smaller tech companies to get access to these policies and enforcement mechanisms.


Source: The Tech Crunch

Read More

It’s official: Brexit campaign broke the law — with social media’s help

Posted by on Jul 17, 2018 in AggregateIQ, BeLeave, Brexit, Cambridge Analytica, election law, Electoral Commission, Europe, Government, Policy, Political Advertising, Privacy, Social, Social Media, Vote Leave | 0 comments

The UK’s Electoral Commission has published the results of a near nine-month-long investigation into Brexit referendum spending and has found that the official Vote Leave campaign broke the law by breaching election campaign spending limits.

Vote Leave broke the law by, among other things, channeling money to a Canadian data firm, AggregateIQ, to target political advertising on Facebook's platform via undeclared joint working with another Brexit campaign, BeLeave, the Commission found.

AggregateIQ remains the subject of a separate joint investigation by privacy watchdogs in Canada and British Columbia.

The Electoral Commission’s investigation found evidence that BeLeave spent more than £675,000 with AggregateIQ under a common arrangement with Vote Leave. Yet the two campaigns had failed to disclose on their referendum spending returns that they had a common plan.

As the designated lead leave campaign, Vote Leave had a £7M spending limit under UK law. But via its joint spending with BeLeave, the Commission determined it actually spent £7,449,079, exceeding the legal limit by £449,079 (almost half a million pounds).

The June 2016 referendum in the UK resulted in a narrow 52:48 majority for the UK to leave the European Union. Two years on from the vote, the government has yet to agree a coherent policy strategy for negotiations with the EU, leaving businesses to absorb the ongoing uncertainty and society riven and divided.

Meanwhile, Facebook — whose platform played a key role in distributing referendum messaging — booked revenue of around $40.7BN in 2017 alone, reporting a full year profit of almost $16BN.

Back in May, long-time leave supporter and MEP, Nigel Farage, told CEO Mark Zuckerberg to his face in the European Parliament that without “Facebook and other forms of social media there is no way that Brexit or Trump or the Italian elections could ever possibly have happened”.

The Electoral Commission's investigation focused on funding and spending, and mainly concerned five payments for EU Referendum campaign services made to AggregateIQ in June 2016 by the three Brexit campaigns it investigated (the third being Veterans for Britain).

Veterans for Britain's spending return included a donation of £100,000 that was reported as a cash donation received and accepted on 20 May 2016. But the Commission found this was in fact a payment by Vote Leave to AggregateIQ for services provided to Veterans for Britain in the final days of the EU Referendum campaign. The date was also incorrectly reported: it was actually paid by Vote Leave on 29 June 2016.

Despite the donation to a third Brexit campaign coming from the official Vote Leave campaign, and paying for services from AggregateIQ (which was simultaneously providing services to Vote Leave), the Commission did not deem it to constitute joint working, writing: “[T]he evidence we have seen does not support the concern that the services were provided to Veterans for Britain as joint working with Vote Leave.”

It was, however, found to constitute an inaccurate donation report — another offense under the UK’s Political Parties, Elections and Referendums Act 2000.

The report details multiple issues with spending returns across the three campaigns. And the Commission has issued a series of fines to the three Brexit campaigns.

It has also referred two individuals — Vote Leave’s David Alan Halsall and BeLeave’s Darren Grimes — to the UK’s Metropolitan Police Service, which has the power to instigate a criminal investigation.

Early last year the Commission decided not to fully investigate Vote Leave's spending, but it says that by October new information had emerged suggesting “a pattern of action by Vote Leave”, so it revisited its assessment and reopened the investigation in November.

Its report also makes it clear that Vote Leave failed to co-operate with its investigation — including by failing to produce requested information and documents; by failing to provide representatives for interview; by ignoring deadlines to respond to formal investigation notices; and by objecting to the fact of the investigation, including suggesting it would judicially review the opening of the investigation.

Judging by the Commission’s account, Vote Leave seemingly did everything it could to try to thwart and delay the investigation — which is only reporting now, two years on from the Brexit vote and with mere months of negotiating time left before the end of the formal Article 50 exit notification process.

What's crystal clear from this report is that following money and data trails takes time and painstaking investigation. Given that, y'know, democracy is at stake, that heavily bolsters the case for far more stringent regulations and transparency mechanisms to prevent powerful social media platforms from quietly absorbing politically motivated money and messaging without recognizing any responsibility to disclose the transactions, let alone carry out due diligence on who or what may be funding the political spending.

The political ad transparency measures that Facebook has announced so far come far too late for Brexit, or indeed for the 2016 US presidential election, when its platform carried and amplified Kremlin-funded divisive messaging that reached the eyeballs of hundreds of millions of US voters.

Last week the UK's information commissioner, Elizabeth Denham, criticized Facebook for transparency and control failures relating to political ads on its platform. She also announced the ICO's intention to fine Facebook the maximum possible for breaches of UK data protection law relating to the Cambridge Analytica scandal, after it emerged that information on as many as 87 million Facebook users was extracted from its platform and passed to a controversial UK political consultancy without most people's knowledge or consent.

She also published a series of policy recommendations around digital political campaigning, calling for an ethical pause on the use of personal data for political ad targeting and warning that a troubling lack of transparency about how people's data is being used risks undermining public trust in democracy.

“Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default,” she warned.

The Cambridge Analytica Facebook scandal is linked to the Brexit referendum via AggregateIQ, which was also a contractor for Cambridge Analytica and also handled Facebook user information that Cambridge Analytica had improperly obtained after paying a Cambridge University academic to use a quiz app to harvest people's data and build psychometric profiles for ad targeting.

The Electoral Commission says it was approached by Facebook during the Brexit campaign spending investigation with “some information about how Aggregate IQ used its services during the EU Referendum campaign”.

We’ve reached out to Facebook for comment on the report and will update this story with any response.

The Commission states that evidence from Facebook indicates that AggregateIQ used “identical target lists for Vote Leave and BeLeave ads”, although in at least one instance the BeLeave ads “were not run”.

It writes:

BeLeave’s ability to procure services from Aggregate IQ only resulted from the actions of Vote Leave, in providing those donations and arranging a separate donor for BeLeave. While BeLeave may have contributed its own design style and input, the services provided by Aggregate IQ to BeLeave used Vote Leave messaging, at the behest of BeLeave’s campaign director. It also appears to have had the benefit of Vote Leave data and/or data it obtained via online resources set up and provided to it by Vote Leave to target and distribute its campaign material. This is shown by evidence from Facebook that Aggregate IQ used identical target lists for Vote Leave and BeLeave ads, although the BeLeave ads were not run.

“We also asked for copies of the adverts Aggregate IQ placed for BeLeave, and for details of the reports he received from Aggregate IQ on their use. Mr Grimes replied to our questions,” it further notes in the report.

At the height of the referendum campaign, at a crucial moment when Vote Leave had reached its official spending limit, officials from the lead leave campaign persuaded BeLeave's only other donor, an individual called Anthony Clake, to allow it to funnel a donation from him directly to AggregateIQ, whom Vote Leave campaign director Dominic Cummings dubbed a bunch of “social media ninjas”.

The Commission writes:

On 11 June 2016 Mr Cummings wrote to Mr Clake saying that Vote Leave had all the money it could spend, and suggesting the following: “However, there is another organisation that could spend your money. Would you be willing to spend the 100k to some social media ninjas who could usefully spend it on behalf of this organisation? I am very confident it would be well spent in the final crucial 5 days. Obviously it would be entirely legal. (sic)”

Mr Clake asked about this organisation. Mr Cummings replied as follows: “the social media ninjas are based in canada – they are extremely good. You would send your money directly to them. the organisation that would legally register the donation is a permitted participant called BeLeave, a “young people’s organisation”. happy to talk it through on the phone though in principle nothing is required from you but to wire money to a bank account if you’re happy to take my word for it. (sic)

Mr Clake then emailed Mr Grimes to offer a donation to BeLeave. He specified that this donation would be made “via the AIQ account.”

And while the Commission says it found evidence that Grimes and others from BeLeave had “significant input into the look and design of the BeLeave adverts produced by Aggregate IQ”, it also determined that Vote Leave messaging was “influential in their strategy and design”, hence its determination of a common plan between the two campaigns. AggregateIQ was the vehicle Vote Leave used to breach its campaign spending cap.

Providing examples of the collaboration it found between the two campaigns, the Commission quotes internal BeLeave correspondence — including an instruction from Grimes to: “Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”.

It writes:

On 15 June 2016 Mr Grimes told other BeLeave Board members and Aggregate IQ that BeLeave’s ads needed to be: “an effective way of pushing our more liberal and progressive message to an audience which is perhaps not as receptive to Vote Leave’s messaging.”

On 17 June 2016 Mr Grimes told other BeLeave Board members: “So as soon as we can go live. Advertising should be back on tomorrow and normal operating as of Sunday. I’d like to make sure we have loads of scheduled tweets and Facebook status. Post all of those blogs including Shahmirs [aka Shahmir Sanni, who became a BeLeave whistleblower], use favstar to check out and repost our best performing tweets. Copy and paste lines from Vote Leave’s briefing room in a BeLeave voice”


Source: The Tech Crunch

Read More

UK’s Information Commissioner will fine Facebook the maximum £500K over Cambridge Analytica breach

Posted by on Jul 10, 2018 in Cambridge Analytica, Facebook, Privacy, Social, TC | 1 comment

Facebook continues to face fallout over the Cambridge Analytica scandal, which revealed how user data was stealthily obtained by way of quizzes and then appropriated for other purposes, such as targeted political advertising. Today, the U.K. Information Commissioner’s Office (ICO) announced that it would be issuing the social network with its maximum fine, £500,000 ($662,000) after it concluded that it “contravened the law” — specifically the 1998 Data Protection Act — “by failing to safeguard people’s information.”

The ICO is clear that Facebook effectively broke the law by failing to keep users' data safe: its systems allowed Dr Aleksandr Kogan, who developed an app called “This is your digital life” on behalf of Cambridge Analytica, to scrape the data of up to 87 million Facebook users. This included accessing all of the friends' data of the individual accounts that had engaged with Dr Kogan's app.

The ICO’s inquiry first started in May 2017 in the wake of the Brexit vote and questions over how parties could have manipulated the outcome using targeted digital campaigns.

Damian Collins, the MP who chairs the Digital, Culture, Media and Sport (DCMS) Committee undertaking the investigation, said as a result that the committee will now demand more information from Facebook, including which other apps might also have been involved or used in a similar way by others, as well as what potential links all of this activity might have had to Russia. He's also gearing up to demand a full, independent investigation of the company, rather than the internal audit Facebook has so far provided. A full statement from Collins is below.

The fine, and the follow-up questions that U.K. government officials are now asking, are a signal that Facebook — after months of grilling on both sides of the Atlantic amid a wider investigation — is not yet off the hook in the U.K. This will come as good news to those who watched the hearings (and non-hearings) in Washington, London and European Parliament and felt that Facebook and others walked away relatively unscathed. The reverberations are also being felt in other parts of the world. In Australia, a group earlier today announced that it was forming a class action lawsuit against Facebook for breaching data privacy as well. (Australia has also been conducting a probe into the scandal.)

The ICO also put forward three questions alongside its announcement of the fine, which it will now be seeking answers to from Facebook. In its own words:

  1. Who had access to the Facebook data scraped by Dr Kogan, or any data sets derived from it?
  2. Given Dr Kogan also worked on a project commissioned by the Russian Government through the University of St Petersburg, did anyone in Russia ever have access to this data or data sets derived from it?
  3. Did organisations who benefited from the scraped data fail to delete it when asked to by Facebook, and if so where is it now?

The DCMS committee has been conducting a wider investigation into disinformation and data use in political campaigns and it plans to publish an interim report on it later this month.

Collins’ full statement:

Given that the ICO is saying that Facebook broke the law, it is essential that we now know which other apps that ran on their platform may have scraped data in a similar way. This cannot be left to a secret internal investigation at Facebook. If other developers broke the law we have a right to know, and the users whose data may have been compromised in this way should be informed.

Facebook users will be rightly concerned that the company left their data far too vulnerable to being collected without their consent by developers working on behalf of companies like Cambridge Analytica. The number of Facebook users affected by this kind of data scraping may be far greater than has currently been acknowledged. Facebook should now make the results of their internal investigations known to the ICO, our committee and other relevant investigatory authorities.

Facebook state that they only knew about this data breach when it was first reported in the press in December 2015. The company has consistently failed to answer the questions from our committee as to who at Facebook was informed about it. They say that Mark Zuckerberg did not know about it until it was reported in the press this year. In which case, given that it concerns a breach of the law, they should state who was the most senior person in the company to know, why they decided people like Mark Zuckerberg didn’t need to know, and why they didn’t inform users at the time about the data breach. Facebook need to provide answers on these important points. These important issues would have remained hidden, were it not for people speaking out about them. Facebook’s response during our inquiry has been consistently slow and unsatisfactory.

The receivers of SCL elections should comply with the law and respond to the enforcement notice issued by the ICO. It is also disturbing that AIQ have failed to comply with their enforcement notice.

Facebook has been in the crosshairs of the ICO over other data protection issues before, and has not come out well.


Source: The Tech Crunch

Read More

Facebook quietly relaunches apps for Groups platform after lockdown

Posted by on Jul 3, 2018 in Apps, Cambridge Analytica, facebook groups, facebook platform, Mobile, Policy, Social, TC | 0 comments

Facebook is becoming a marketplace for enterprise apps that help Group admins manage their communities.

To protect itself and its users in the wake of the Cambridge Analytica scandal, Facebook locked down the Groups API for building apps for Groups. These apps had to go through a human-reviewed approval process, and lost access to Group member lists, plus the names and profile pics of people who posted. Now, approved Groups apps are reemerging on Facebook, accessible to admins through a new in-Facebook Groups apps browser that gives the platform control over discoverability.

Facebook confirmed the new Groups apps browser after our inquiry, telling TechCrunch, “What you’re seeing today is related to changes we announced in April that require developers to go through an updated app review process in order to use the Groups API. As part of this, some developers who have gone through the review process are now able to access the Groups API.”

Facebook wouldn’t comment further, but this Help Center article details how Groups can now add apps. Matt Navarra first spotted the new Groups apps option and tipped us off. Previously, admins would have to find Group management tools outside of Facebook and then use their logged-in Facebook account to give the app permissions to access their Group’s data.

Groups are often a labor of love for admins, but generate tons of engagement for the social network. That’s why the company recently began testing Facebook subscription Groups that allow admins to charge a monthly fee. With the right set of approved partners, the platform offers Group admins some of the capabilities usually reserved for big brands and businesses that pay for enterprise tools to manage their online presences.

Becoming a gateway to enterprise tool sets could make Facebook Groups more engaging, generating more time on site and ad views from users. This also positions Facebook as a natural home for ad campaigns promoting different enterprise tools. And one day, Facebook could potentially try to act more formally as a Groups App Store and try to take a cut of software-as-a-service subscription fees the tool makers charge.

Facebook can’t build every tool that admins might need, so in 2010 it launched the Groups API to enlist some outside help. Moderating comments, gathering analytics and posting pre-composed content were some of the popular capabilities of Facebook Groups apps. But in April, it halted use of the API, announcing that “there is information about people and conversations in groups that we want to make sure is better protected. Going forward, all third-party apps using the Groups API will need approval from Facebook and an admin to ensure they benefit the group.”

Now apps that have received the necessary approval are appearing in this Groups apps browser. It’s available to admins through their Group Settings page. The apps browser lets them pick from a selection of tools like Buffer and Sendible for scheduling posts to their Group, and others for handling commerce messages.
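To make the mechanics concrete, here is a minimal sketch of what a scheduling tool like those above does once an admin installs it: it holds an access token the admin granted and calls the Graph API's group feed endpoint on the admin's behalf. The API version, group ID, token and helper function below are illustrative assumptions rather than Facebook's documented contract, and under the new rules a call like this would only succeed for an app that has passed Facebook's review.

# A minimal, hypothetical sketch in Python; the endpoint path, API version
# and token values are assumptions for illustration only.
import requests

GRAPH_URL = "https://graph.facebook.com/v3.0"  # assumed API version
GROUP_ID = "123456789012345"                   # placeholder Group ID
ACCESS_TOKEN = "EAAB..."                       # token granted when the admin installed the app

def post_to_group_feed(message):
    """Publish a text post to the Group's feed on the admin's behalf."""
    resp = requests.post(
        "{}/{}/feed".format(GRAPH_URL, GROUP_ID),
        data={"message": message, "access_token": ACCESS_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()  # surfaces permission errors for unapproved apps
    return resp.json()       # e.g. {"id": "<group_id>_<post_id>"}

if __name__ == "__main__":
    print(post_to_group_feed("Reminder: this week's discussion thread is live."))

A real tool such as Buffer or Sendible would layer scheduling, retries and error handling on top of a call like this; the point is simply that everything flows through tokens an admin explicitly grants, which is what Facebook's review process now gates.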

Facebook is still trying to bar the windows of its platform, ensuring there are no more easy ways to slurp up massive amounts of sensitive user data. Yesterday it shut down more APIs and standalone apps in what appears to be an attempt to streamline the platform so there are fewer points of risk and more staff to concentrate on safeguarding the most popular and powerful parts of its developer offering.

The Cambridge Analytica scandal has subsided to some degree, with Facebook's share price recovering and user growth holding steady. However, a new report from The Washington Post says the FBI, FTC and SEC will be investigating Facebook, Cambridge Analytica and the social network's executives' testimony to Congress. Facebook surely wants to get back to concentrating on product, not politics, but it must take things slow and steady. There are too many eyes on it to move fast or break anything.


Source: The Tech Crunch

Read More