
The blog of DataDiggers


Targeted ads offer little extra value for online publishers, study suggests

Posted on May 31, 2019

How much value do online publishers derive from behaviorally targeted advertising that uses privacy-hostile tracking technologies to determine which advert to show a website user?

A new piece of research suggests publishers earn just 4% more than they would from serving a non-targeted ad.

It’s a finding that sheds suggestive light on why so many newsroom budgets are shrinking and journalists are finding themselves out of work — even as adtech giants continue stuffing their coffers with massive profits.

Visit the average news website, lousy with third-party cookies (yes, we know, it’s true of TC too), and you’d be forgiven for thinking the publisher is also getting fat profits from the data creamed off its users as they plug into programmatic ad systems that trade info on Internet users’ browsing habits to determine which ad gets displayed.

Yet while the online ad market is massive and growing — $88BN in revenues in the US in 2017, per IAB data, a 21% year-on-year increase — publishers are not the entities getting filthy rich off of their own content.

On the contrary, research in recent years has suggested that a large proportion of publishers are being squeezed by digital display advertising economics, with some 40% reporting either stagnant or shrinking ad revenue, per a 2015 Econsultancy study. (Hence, we can posit, the rise in publishers branching into subscriptions — TC’s own offering can be found here: Extra Crunch).

The lion’s share of value being created by digital advertising ends up in the coffers of the adtech giants, Google and Facebook — aka the adtech duopoly. In the US, the pair account for around 60% of digital ad market spending, per eMarketer — or circa $76.57BN.

Their annual revenues have mirrored overall growth in digital ad spend — rising from $74.9BN to $136.8BN, between 2015 and 2018, in the case of Google’s parent Alphabet; and $17.9BN to $55.8BN for Facebook. (While US online ad spend stepped up from $59.6BN to $107.5BN+ between 2015 and 2018.)
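For a sense of pace, the quoted figures imply the following compound annual growth rates over the three-year span — a rough back-of-envelope sketch using only the revenue numbers reported above:

```python
# Implied compound annual growth rates (CAGR) from the 2015 -> 2018
# revenue figures quoted above (three year-on-year steps).
def cagr(start_bn, end_bn, years=3):
    """Annualized growth rate implied by a start and end value."""
    return (end_bn / start_bn) ** (1 / years) - 1

alphabet = cagr(74.9, 136.8)     # Alphabet: roughly 22% a year
facebook = cagr(17.9, 55.8)      # Facebook: roughly 46% a year
us_ad_spend = cagr(59.6, 107.5)  # US online ad spend: roughly 22% a year

print(f"Alphabet: {alphabet:.1%}, Facebook: {facebook:.1%}, "
      f"US ad spend: {us_ad_spend:.1%}")
```

On these numbers Facebook grew roughly twice as fast as the overall US market, while Alphabet tracked it almost exactly.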

eMarketer projects 2019 will mark the first decline in the duopoly’s collective share. But not because publishers’ fortunes are suddenly set for a bonanza turnaround. Rather another tech giant — Amazon — has been growing its share of the digital ad market, and is expected to make what eMarketer dubs the start of “a small dent in the duopoly”.

Behavioral advertising — aka targeted ads — has come to dominate the online ad market, fuelled by platform dynamics encouraging a proliferation of tracking technologies and techniques in the unregulated background. And by, it seems, greater effectiveness from the perspective of online advertisers, as the paper notes. (“Despite measurement and attribution challenges… many studies seem to concur that targeted advertising is beneficial and effective for advertising firms.”)

This has had the effect of squeezing out non-targeted display ads, such as those that rely on contextual factors to select the ad — e.g. the content being viewed, device type or location.

The latter are now the exception; a fall-back, such as for when cookies have been blocked. (Albeit one that veteran pro-privacy search engine DuckDuckGo has nonetheless turned into a profitable contextual ad business.)

One 2017 study by IHS Markit suggested that 86% of programmatic advertising in Europe was using behavioural data, while even a quarter (24%) of non-programmatic advertising was found to be using behavioural data, per its model.

“In 2016, 90% of the digital display advertising market growth came from formats and processes that use behavioural data,” it observed, projecting growth of 106% for behaviourally targeted advertising between 2016 and 2020, and a decline of 63.6% for forms of digital advertising that don’t use such data.

The economic incentives to push behavioral advertising vs non-targeted ads look clear for dominant platforms that rely on amassing scale — across advertisers, other people’s eyeballs, content and behavioral data — to extract value from the Internet’s dispersed and diverse audience.

But the incentives for content producers to subject themselves — and their engaged communities of users — to these privacy-hostile economies of scale look a whole lot more fuzzy.

Concern about potential imbalances in the online ad market is also leading policymakers and regulators on both sides of the Atlantic to question the opacity of the market — and call for greater transparency.

A price on people tracking’s head

The new research, which will be presented at the Workshop on the Economics of Information Security conference in Boston next week, aims to contribute a new piece to this digital ad revenue puzzle by trying to quantify the value to a single publisher of choosing ads that are behaviorally targeted vs those that aren’t.

We’ve flagged the research before — when the findings were cited by one of the academics involved in the study at an FTC hearing — but the full paper has now been published.

It’s called Online Tracking and Publishers’ Revenues: An Empirical Analysis, and is co-authored by three academics: Veronica Marotta, an assistant professor in information and decision sciences at the Carlson School of Management, University of Minnesota; Vibhanshu Abhishek, associate professor of information systems at the Paul Merage School of Business, University of California, Irvine; and Alessandro Acquisti, professor of IT and public policy at Carnegie Mellon University.

“While the impact of targeted advertising on advertisers’ campaign effectiveness has been vastly documented, much less is known about the value generated by online tracking and targeting technologies for publishers – the websites that sell ad spaces,” the researchers write. “In fact, the conventional wisdom that publishers benefit too from behaviorally targeted advertising has rarely been scrutinized in academic studies.”

“As we briefly mention in the paper, notwithstanding claims about the shared benefits of online tracking and behaviorally targeting for multiple stakeholders (merchants, publishers, consumers, intermediaries…), there is a surprising paucity of empirical estimates of economic outcomes from independent researchers,”  Acquisti also tells us.

“In fact, most of the estimates focus on the advertisers’ side of the market (for instance, there have been quite a few studies estimating the increase in click-through or conversion rates associated with targeted ads); much less is known about the publishers’ side of the market. So, going into the study, we were genuinely curious about what we may find, as there was little in terms of data that could anchor our predictions.

“We did have theoretical bases to make possible predictions, but those predictions could be quite antithetical. Under one story, targeting increases the value of the audience, which increases advertisers’ bids, which increases publishers’ revenues; under a different story, targeting decreases the ‘pool’ of audience interested in an ad, which decreases competition to display ads, which reduces advertisers’ bids, eventually reducing publishers’ revenues.”

For the study the researchers were provided with a data-set comprising “millions” of display ad transactions completed in a week across multiple online outlets owned by a single (unidentified) large publisher which operates websites in a range of verticals such as news, entertainment and fashion.

The data-set also included whether or not the site visitor’s cookie ID was available — enabling analysis of the price difference between behaviorally targeted and non-targeted ads. (The researchers used a statistical mechanism to control for systematic differences between users who block cookies.)

As noted above, the top-line finding is that behavioral targeting yields only a very small gain for the publisher whose data they were analyzing — around 4%, or an average increase of $0.00008 per advertisement.
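To put that per-impression figure in context, a quick illustrative sketch: the implied baseline price below is our own inference from the two quoted numbers (a 4% uplift worth $0.00008 per ad), not a figure taken from the paper.

```python
# What the study's headline numbers imply per impression.
uplift = 0.04            # ~4% revenue premium for a behaviorally targeted ad
extra_per_ad = 0.00008   # average extra revenue per advertisement, in dollars

# Implied baseline price of a non-targeted impression (inferred, not from the paper).
baseline = extra_per_ad / uplift   # $0.002 per ad, i.e. a $2 CPM

# Extra revenue the premium yields per million impressions served.
extra_per_million = extra_per_ad * 1_000_000

print(f"implied baseline CPM: ${baseline * 1000:.2f}")
print(f"extra revenue per 1M targeted ads: ${extra_per_million:.0f}")
```

In other words, on these figures a publisher serving a million behaviorally targeted impressions picks up on the order of $80 more than it would have from contextual ads.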

It’s a finding that contrasts wildly with some of the loud yet unsubstantiated opinions which can be found being promulgated online — claiming the ‘vital necessity’ of behavioral ads to support publishers/journalism.

For example, this article, published earlier this month by a freelance journalist writing for The American Prospect, includes the claim that: “An online advertisement without a third-party cookie sells for just 2 percent of the cost of the same ad with the cookie.” Yet it does not specify a source for the statistic it cites.

(The author told us the reference is to a 2018 speech made by Index Exchange’s Andrew Casale, when he suggested ad requests without a buyer ID receive 99% lower bids vs the same ad request with the identifier. She added that her conversations with people in the adtech industry had suggested a spread between a 99% and 97% decline in the value of an ad without a cookie, hence choosing a middle point.)
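The gap between the two claims is stark enough to be worth spelling out. Normalizing the targeted ad price to 1, a minimal sketch of what each claim implies a cookie-less ad is worth:

```python
# Contrast the study's ~4% premium with the anecdotal
# "an ad without a cookie sells for 2% of the cost" claim.
targeted_price = 1.0  # normalize the targeted ad's price to 1

# Study: targeted ads earn 4% more, so a non-targeted ad is worth 1/1.04.
study_nontargeted = targeted_price / 1.04

# Anecdote: a non-targeted ad fetches 2 cents on the dollar.
anecdote_nontargeted = targeted_price * 0.02

print(f"study implies non-targeted ads are worth ~{study_nontargeted:.0%} "
      f"of targeted; anecdote implies {anecdote_nontargeted:.0%}")
```

The study's data puts a cookie-less ad at roughly 96% of the value of a targeted one; the industry anecdote puts it at 2% — a nearly fifty-fold disagreement about the same quantity.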

At the same time policymakers in the US now appear painfully aware how far behind Europe they are lagging where privacy regulation is concerned — and are fast dialling up their scrutiny of and verbal horror over how Internet users are tracked and profiled by adtech giants.

At a Senate Judiciary Committee hearing earlier this month — convened with the aim of “understanding the digital ad ecosystem and the impact of data privacy and competition policy” — the talk was not whether to regulate big tech but how hard to crack down on monopolistic ad giants.

“That’s what brings us here today. The lack of choice [for consumers to preserve their privacy online],” said senator Richard Blumenthal. “The excessive and extraordinary power of Google and Facebook and others who dominate the market is a fact of life. And so privacy protection is absolutely vital in the short run.”

The kind of “invasive surveillance” that the adtech industry systematically deploys is “something we would never tolerate from a government but Facebook and Google have the power of government never envisaged by our founders,” Blumenthal went on, before listing a few of the types of personal data that are sucked up and exploited by the adtech industrial surveillance complex: “Health, dating, location, finance, extremely personal details — offered to anyone with almost no restraint.”

Bearing that “invasive surveillance” in mind, a 4% publisher ‘premium’ for privacy-hostile ads vs adverts that are merely contextually served (and so don’t require pervasive tracking of web users) starts to look like a massive rip off — of both publisher brand and audience value, as well as Internet users’ rights and privacy.

Yes, targeted ads do appear to generate a small revenue increase, per the study. But, as the researchers also point out, that needs to be offset against the cost to publishers of complying with privacy regulations.

“If setting tracking cookies on visitors was cost free, the website would definitely be losing money. However, the widespread use of tracking cookies – and, more broadly, the practice of tracking users online – has been raising privacy concerns that have led to the adoption of stringent regulations, in particular in the European Union,” they write — going on to cite an estimate by the International Association of Privacy Professionals that Fortune’s Global 500 companies will spend around $7.8BN on compliance costs to meet the requirements of Europe’s General Data Protection Regulation (GDPR).

Wider costs of systematically eroding online privacy are harder for publishers to put a value on. But they should also be considered — whether the cost to brand reputation and user loyalty as a result of a publisher larding its sites with unwanted trackers, or wider societal costs linked to the risks of data-fuelled manipulation and exploitation of vulnerable groups. Simply put, it’s not a good look.

Publishers may appear complicit in the asset stripping of their own content and audiences for what — per this study — seems only marginal gain, but the opacity of the adtech industry implies that most publishers likely don’t realize exactly what kind of ‘deal’ they’re getting at the hands of the ad giants who grip them.

Which makes this research paper a very compelling read for the online publishing industry… and, well, a pretty awkward newsflash for anyone working in adtech.


While the study only provides a snapshot of ad market economics, as experienced by a single publisher, the glimpse it presents is distinctly different from the picture the adtech lobby has sought to paint, as it has ploughed money into arguing against privacy legislation — on the claimed grounds that ‘killing behavioural advertising would kill free online content’. 

Saying no more creepy ads might only marginally reduce publishers’ revenue doesn’t have quite the same doom-laden ring, clearly.

“In a nutshell, this study provides an initial data point on a portion of the advertising ecosystem over which claims had been made but little empirical verification was completed. The results highlight the need for more transparency over how the value generated by flows of data gets allocated to different stakeholders,” says Acquisti, summing up how the study should be read against the ad market as a whole.

Contacted for a response to the research, Randall Rothenberg, CEO of advertising business organization, the IAB, agreed that the digital supply chain is “too complex and too opaque” — and also expressed concern about how relatively little value generated by targeted ads is trickling down to publishers.

“One week’s worth of data from one unidentified publisher does not make for a projectible (sic) piece of research. Still, the study shows that targeted advertising creates immense value for brands — more than 90% of the unnamed publisher’s auctioned ads were sold with targeting attached, and advertisers were willing to pay a 60% premium for those ads. Yet very little of that value flowed to the publisher,” he told TechCrunch. “As IAB has been saying for a decade, the digital supply chain is too complex and too opaque, and this diversion of value is more proof that transparency is required so that publishers can benefit from the value they create.”

The research paper includes discussion of the limitations to the approach, as well as ideas for additional research work — such as looking at how the value of cookies changes depending on how much information they contain (on that they write of their initial findings: “Information seem to be very valuable (from the publisher’s perspective) when we compare cookies with very little information to cookies with some information; after a certain point, adding more information to a cookie does not seem to create additional value for the publisher”); and investigating how “the (un)availability of a cookie changes the competition in the auction” — to try to understand ad auction competition dynamics and the potential mechanisms at play.

“This is one new and hopefully useful data point, to which others must be added,” Acquisti also told us in concluding remarks. “The key to research work is incremental progress, with more studies progressively adding a clearer understanding of an issue, and we look forward to more research in this area.”

This report was updated with additional comment.


Source: The Tech Crunch


Online platforms need a super regulator and public interest tests for mergers, says UK parliament report

Posted on Mar 11, 2019

The latest policy recommendations for regulating powerful Internet platforms come from a U.K. House of Lords committee that’s calling for an overarching digital regulator to be set up to plug gaps in domestic legislation and work through any overlaps of rules.

“The digital world does not merely require more regulation but a different approach to regulation,” the committee writes in a report published on Saturday, saying the government has responded to “growing public concern” in a piecemeal fashion, whereas “a new framework for regulatory action is needed”.

It suggests a new body — which it’s dubbed the Digital Authority — be established to “instruct and coordinate regulators”.

“The Digital Authority would have the remit to continually assess regulation in the digital world and make recommendations on where additional powers are necessary to fill gaps,” the committee writes, saying that it would also “bring together non-statutory organisations with duties in this area” — so presumably bodies such as the recently created Centre for Data Ethics and Innovation (which is intended to advise the UK government on how it can harness technologies like AI for the public good).

The committee report sets out ten principles that it says the Digital Authority should use to “shape and frame” all Internet regulation — and develop a “comprehensive and holistic strategy” for regulating digital services.

These principles (listed below) read, rather unfortunately, like a list of big tech failures. Perhaps especially given Facebook founder Mark Zuckerberg’s repeat refusal to testify before another UK parliamentary committee last year. (Leading to another highly critical report.)

  • Parity: the same level of protection must be provided online as offline
  • Accountability: processes must be in place to ensure individuals and organisations are held to account for their actions and policies
  • Transparency: powerful businesses and organisations operating in the digital world must be open to scrutiny
  • Openness: the internet must remain open to innovation and competition
  • Privacy: to protect the privacy of individuals
  • Ethical design: services must act in the interests of users and society
  • Recognition of childhood: to protect the most vulnerable users of the internet
  • Respect for human rights and equality: to safeguard the freedoms of expression and information online
  • Education and awareness-raising: to enable people to navigate the digital world safely
  • Democratic accountability, proportionality and evidence-based approach

“Principles should guide the development of online services at every stage,” the committee urges, calling for greater transparency at the point data is collected; greater user choice over which data are taken; and greater transparency around data use — “including the use of algorithms”.

So, in other words, a reversal of the ‘opt-out if you want any privacy’ approach to settings that’s generally favored by tech giants — even as it’s being challenged by complaints filed under Europe’s GDPR.

The UK government is due to put out a policy White Paper on regulating online harms this winter. But the Lords Communications Committee suggests the government’s focus is too narrow, calling also for regulation that can intervene to address how “the digital world has become dominated by a small number of very large companies”.

“These companies enjoy a substantial advantage, operating with an unprecedented knowledge of users and other businesses,” it warns. “Without intervention the largest tech companies are likely to gain more control of technologies which disseminate media content, extract data from the home and individuals or make decisions affecting people’s lives.”

The committee recommends public interest tests should therefore be applied to potential acquisitions when tech giants move in to snap up startups, warning that current competition law is struggling to keep pace with the ‘winner takes all’ dynamic of digital markets and their network effects.

“The largest tech companies can buy start-up companies before they can become competitive,” it writes. “Responses based on competition law struggle to keep pace with digital markets and often take place only once irreversible damage is done. We recommend that the consumer welfare test needs to be broadened and a public interest test should be applied to data-driven mergers.”

Market concentration also means a small number of companies have “great power in society and act as gatekeepers to the internet”, it also warns, suggesting that while greater use of data portability can help, “more interoperability” is required for the measure to make an effective remedy.

The committee also examined online platforms’ current legal liabilities around content, and recommends beefing these up too — saying self-regulation is failing and calling out social media sites’ moderation processes specifically as “unacceptably opaque and slow”.

High level political pressure in the UK recently led to a major Instagram policy change around censoring content that promotes suicide — though the shift was triggered after a public outcry related to the suicide of a young schoolgirl who had been exposed to pro-suicide content on Instagram years before.

Like other UK committees and government advisors, the Lords committee wants online services which host user-generated content to be subject to a statutory duty of care — with a special focus on children and “the vulnerable in society”.

“The duty of care should ensure that providers take account of safety in designing their services to prevent harm. This should include providing appropriate moderation processes to handle complaints about content,” it writes, recommending telecoms regulator Ofcom is given responsibility for enforcement.

“Public opinion is growing increasingly intolerant of the abuses which big tech companies have failed to eliminate,” it adds. “We hope that the industry will welcome our 10 principles and their potential to help restore trust in the services they provide. It is in the industry’s own long-term interest to work constructively with policy-makers. If they fail to do so, they run the risk of further action being taken.”


Source: The Tech Crunch


Cookie walls don’t comply with GDPR, says Dutch DPA

Posted on Mar 8, 2019

Cookie walls that demand a website visitor agrees to their Internet browsing being tracked for ad-targeting as the ‘price’ of entry to the site are not compliant with European data protection law, the Dutch data protection agency clarified yesterday.

The DPA said it has received dozens of complaints from Internet users who had had their access to websites blocked after refusing to accept tracking cookies — so it has taken the step of publishing clear guidance on the issue.

It also says it will be stepping up monitoring, adding that it has written to the most complained about organizations (without naming any names) — instructing them to make changes to ensure they come into compliance with GDPR.

Europe’s General Data Protection Regulation, which came into force last May, tightens the rules around consent as a legal basis for processing personal data — requiring it to be specific, informed and freely given in order for it to be valid under the law.

Of course consent is not the only legal basis for processing personal data but many websites do rely on asking Internet visitors for consent to ad cookies as they arrive.

And the Dutch DPA’s guidance makes it clear Internet visitors must be asked for permission in advance for any tracking software to be placed — such as third party tracking cookies; tracking pixels; and browser fingerprinting tech — and that that permission must be freely obtained. Ergo, a free choice must be offered.

So, in other words, a ‘data for access’ cookie wall isn’t going to cut it. (Or, as the DPA puts it: “Permission is not ‘free’ if someone has no real or free choice. Or if the person cannot refuse giving permission without adverse consequences.”)

“This is not for nothing; website visitors must be able to trust that their personal data are properly protected,” it further writes in a clarification published on its website [translated via Google Translate].

“There is no objection to software for the proper functioning of the website and the general analysis of the visit on that site. More thorough monitoring and analysis of the behavior of website visitors and the sharing of this information with other parties is only allowed with permission. That permission must be completely free,” it adds. 

We’ve reached out to the DPA with questions.

In light of this ruling the cookie wall on the Internet Advertising Bureau (IAB)’s European site looks like a textbook example of what not to do — given the online ad industry association is bundling multiple cookie uses (site functional cookies; site analytical cookies; and third party advertising cookies) under a single ‘I agree’ option.

It does not offer visitors any opt-outs at all. (Not even under the ‘More info’ or privacy policy options.)

If the user does not click ‘I agree’ they cannot gain access to the IAB’s website. So there’s no free choice here. It’s agree or leave.

Clicking ‘More info’ brings up additional information about the purposes the IAB uses cookies for — where it states it is not using collected information to create “visitor profiles”.

However it notes it is using Google products, and explains that some of these use cookies that may collect visitors’ information for advertising — thereby bundling ad tracking into the provision of its website ‘service’.

Again the only ‘choice’ offered to site visitors is ‘I agree’ or to leave without gaining access to the website. Which means it’s not a free choice.

The IAB told us no data protection agencies had been in touch regarding its cookie wall.

Asked whether it intends to amend the cookie wall in light of the Dutch DPA’s guidance a spokeswoman said she wasn’t sure what the team planned to do yet — but she claimed GDPR does not “outright prohibit making access to a service conditional upon consent”; pointing also to the (2002) ePrivacy Directive which she claimed applies here, saying it “also includes recital language to the effect of saying that website content can be made conditional upon the well-informed acceptance of cookies”.

So the IAB’s position appears to be that the ePrivacy Directive trumps GDPR on this issue.

Though it’s not clear how they’ve arrived at that conclusion. (The fifteen-plus-year-old ePrivacy Directive is also in the process of being updated — while the flagship GDPR only came into force last year.)

The portion of the ePrivacy Directive that the IAB appears to be referring to is recital 25 — which includes the following line:

Access to specific website content may still be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose.

However “specific website content” is hardly the same as full site access — which is what the IAB’s cookie wall entirely blocks.

The “legitimate purpose” point in the recital also provides a second caveat vis-a-vis making access conditional on accepting cookies — and the recital text includes an example of “facilita[ting] the provision of information society services” as such a legitimate purpose.

What are “information society services”? An earlier European directive defines this legal term as services that are “provided at a distance, electronically and at the individual request of a recipient” [emphasis ours] — suggesting it refers to Internet content that the user actually intends to access (i.e. the website itself), rather than ads that track them behind the scenes as they surf.

So, in other words, even per the outdated ePrivacy Directive, a site might be able to require consent for functional cookies from a user to access a portion of the site.

But that’s not the same as saying you can gate off an entire website unless the visitor agrees to their browsing being pervasively tracked by advertisers.

That’s not the kind of ‘service’ website visitors are looking for. 

Add to that, returning to present day Europe, the Dutch DPA has put out very clear guidance demolishing cookie walls.

The only sensible legal interpretation here is that the writing is on the wall for cookie walls.


Source: The Tech Crunch


What business leaders can learn from Jeff Bezos’ leaked texts

Posted on Feb 17, 2019

The ‘below the belt selfie’ media circus surrounding Jeff Bezos has made encrypted communications top of mind among nervous executive handlers. Their assumption is that a product with serious cryptography like Wickr – where I work – or Signal could have helped Mr. Bezos and Amazon avoid this drama.

It’s a good assumption, but a troubling conclusion.

I worry that moments like these will drag serious cryptography down to the level of the National Enquirer. I’m concerned that this media cycle may lead people to view privacy and cryptography as a safety net for billionaires rather than a transformative solution for data minimization and privacy.

We live in the chapter of computing when data is mostly unprotected because of corporate indifference. The leaders of our new economy – like the vast majority of society – value convenience and short-term gratification over the security and privacy of consumer, employee and corporate data.  

We cannot let this media cycle pass without recognizing that when corporate executives take a laissez-faire approach to digital privacy, their employees and organizations will follow suit.

Two recent examples illustrate the privacy indifference of our leaders…

  • The most powerful executive in the world is either indifferent to, or unaware that, unencrypted online flirtations would be accessed by nation states and competitors.
  • 2016 presidential campaigns were either indifferent to, or unaware that, unencrypted online communications detailing “off-the-record” correspondence with media and payments to adult actor(s) would be accessed by nation states and competitors.

If our leaders do not respect and understand online security and privacy, then their organizations will not make data protection a priority. It’s no surprise that we see a constant stream of large corporations and federal agencies breached by nation states and competitors. Who then can we look to for leadership?

GDPR is an early attempt by regulators to lead. The European Union enacted GDPR to ensure individuals own their data and to enforce penalties on companies that do not protect personal data. It applies to all data processors, but the EU is clearly focused on sending a message to the large US based data processors – Amazon, Facebook, Google, Microsoft, etc. In January, France’s National Data Protection Commission sent a message by fining Google $57 million for breaching GDPR rules. It was an unprecedented fine that garnered international attention. However, we must remember that in 2018 Google’s revenues were greater than $300 million … per day! GDPR is, at best, an annoying speed-bump in the monetization strategy of large data processors.

It is through this lens that Senator Ron Wyden’s (Oregon) idealistic call for billions of dollars in corporate fines and jail time for executives who enable privacy breaches can be seen as reasonable. When record financial penalties are inconsequential it is logical to pursue other avenues to protect our data.

Real change will come when our leaders understand that data privacy and security can increase profitability and reliability. For example, the Compliance, Governance and Oversight Council reports that an enterprise will spend as much as $50 million to protect 10 petabytes of data, and that $34.5 million of this is spent on protecting data that should be deleted. Serious efficiencies are waiting to be realized and serious cryptography can help.  

So, thank you Mr. Bezos for igniting corporate interest in secure communications. Let’s hope this news cycle convinces our corporate leaders and elected officials to embrace data privacy, protection and minimization because it is responsible, profitable and efficient. We need leaders and elected officials to set an example and respect their own data and privacy if we are to have any hope of their organizations protecting ours.


Source: The Tech Crunch

Read More

Is Europe closing in on an antitrust fix for surveillance technologists?

Posted by on Feb 10, 2019 in android, antitrust, competition law, data protection, data protection law, DCMS committee, digital media, EC, Europe, European Commission, European Union, Facebook, General Data Protection Regulation, Germany, Giovanni Buttarelli, Google, instagram, Margrethe Vestager, Messenger, photo sharing, Privacy, Social, Social Media, social networks, surveillance capitalism, TC, terms of service, United Kingdom, United States | 0 comments

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across different platforms it owns, unless it gains people’s consent (nor can it make use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still manipulate the outcome it wants from users but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external) sources, in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an adtech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights, nor, therefore, in surveillance capacity.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer carry a suggestive punch. Regulators are routinely asked whether it’s time. As the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled, wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags: called to attend awkward grillings by hard-grafting committees, or viciously taken to task at the highest-profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change vs the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: Structural remedies that focus on controlling access to data which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.


Source: The Tech Crunch

Read More

Facebook warned over privacy risks of merging messaging platforms

Posted by on Feb 2, 2019 in antitrust, Apps, Brian Acton, business intelligence, data protection, e2e encryption, Europe, European Commission, Facebook, GDPR, General Data Protection Regulation, instagram, Ireland, Mark Zuckerberg, messaging apps, Privacy, Social, Social Media, WhatsApp | 0 comments

Facebook’s lead data protection regulator in Europe has asked the company for an “urgent briefing” regarding plans to integrate the underlying infrastructure of its three social messaging platforms.

In a statement posted to its website late last week the Irish Data Protection Commission writes: “Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal.”

Last week the New York Times broke the news that Facebook intends to unify the backend infrastructure of its three separate products, couching it as Facebook founder Mark Zuckerberg asserting control over acquisitions whose founders have since left the building.

Instagram’s founders, Kevin Systrom and Mike Krieger, left Facebook last year as a result of rising tensions over reduced independence, according to our sources.

WhatsApp’s founders left even earlier, with Brian Acton departing in late 2017 and Jan Koum sticking it out until spring 2018. The pair reportedly clashed with Facebook execs over user privacy and differences over how to monetize the end-to-end encrypted platform.

Acton later said Facebook had coached him to tell European regulators assessing whether to approve the 2014 merger that it would be “really difficult” for the company to combine WhatsApp and Facebook user data.

In the event, Facebook went on to link accounts across the two platforms just two years after the acquisition closed. It was later hit with a $122M penalty from the European Commission for providing “incorrect or misleading” information at the time of the merger. Though Facebook claimed it had made unintentional “errors” in the 2014 filing.

A further couple of years on and Facebook has now graduated to seeking full platform unification of separate messaging products.

“We want to build the best messaging experiences we can; and people want messaging to be fast, simple, reliable and private,” a spokesperson told us when we asked for a response to the NYT report. “We’re working on making more of our messaging products end-to-end encrypted and considering ways to make it easier to reach friends and family across networks.”

“As you would expect, there is a lot of discussion and debate as we begin the long process of figuring out all the details of how this will work,” the spokesperson added, confirming the substance of the NYT report.

There certainly would be a lot of detail to be worked out. Not least the feasibility of legally merging user data across distinct products in Europe, where a controversial 2016 privacy u-turn by WhatsApp — when it suddenly announced it would after all share user data with parent company Facebook (despite previously saying it would never do so), including sharing data for marketing purposes — triggered swift regulatory intervention.

Facebook was forced to suspend marketing-related data flows in Europe. Though it has continued sharing data between WhatsApp and Facebook for security and business intelligence purposes, leading the French data watchdog to issue a formal notice at the end of 2017 warning that the latter transfers also lack a legal basis.

A court in Hamburg, Germany, also officially banned Facebook from using WhatsApp user data for its own purposes.

Early last year, following an investigation into the data-sharing u-turn, the UK’s data watchdog obtained an undertaking from WhatsApp that it would not share personal data with Facebook until the two services could do so in a way that’s compliant with the region’s strict privacy framework, the General Data Protection Regulation (GDPR).

Facebook only avoided a fine from the UK regulator because it froze data flows after the regulatory intervention. But the company clearly remains on watch — and any fresh moves to further integrate the platforms would trigger instant scrutiny, evidenced by the shot across the bows from the DPC in Ireland (Facebook’s international HQ is based in the country).

The 2016 WhatsApp-Facebook privacy u-turn also occurred prior to Europe’s GDPR coming into force. And the updated privacy framework includes a regime of substantially larger maximum fines for any violations.

Under the regulation watchdogs also have the power to ban companies from processing data. Which, in the case of a revenue-rich data-mining giant like Facebook, could be a far more potent disincentive than even a billion dollar fine.

We’ve reached out to Facebook for comment on the Irish DPC’s statement and will update this report with any response.

Here’s the full statement from the Irish watchdog:

While we understand that Facebook’s proposal to integrate the Facebook, WhatsApp and Instagram platforms is at a very early conceptual stage of development, the Irish DPC has asked Facebook Ireland for an urgent briefing on what is being proposed. The Irish DPC will be very closely scrutinising Facebook’s plans as they develop, particularly insofar as they involve the sharing and merging of personal data between different Facebook companies. Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal. It must be emphasised that ultimately the proposed integration can only occur in the EU if it is capable of meeting all of the requirements of the GDPR.

Facebook may be hoping that extending end-to-end encryption to Instagram as part of its planned integration effort, per the NYT report, could offer a technical route to stop any privacy regulators’ hammers from falling.

Though use of e2e encryption still does not shield metadata from being harvested. And metadata offers a rich source of inferences about individuals which, under EU law, would certainly constitute personal data. So even with robust encryption across Instagram, Facebook and WhatsApp, the unified messaging platforms could still collectively leak plenty of personal data to their data-mining parent.
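To make the metadata point concrete, here is a minimal sketch (hypothetical records, not Facebook’s actual schema) of how much a platform operator can infer without reading a single encrypted message — sender, recipient, timing and size are all visible to whoever runs the servers:

```python
from collections import Counter

# Toy metadata records: with end-to-end encryption the *content* is
# unreadable, but these envelope fields are still visible to the operator.
events = [
    {"sender": "alice", "recipient": "bob",   "ts": 1549200000, "size": 240},
    {"sender": "alice", "recipient": "bob",   "ts": 1549200060, "size": 180},
    {"sender": "alice", "recipient": "carol", "ts": 1549300000, "size": 90},
]

# Who talks to whom, and how often — a social graph recoverable from
# metadata alone, no decryption required.
contact_graph = Counter((e["sender"], e["recipient"]) for e in events)

assert contact_graph[("alice", "bob")] == 2
assert contact_graph[("alice", "carol")] == 1
```

A real operator would of course aggregate far richer signals (timestamps reveal daily routines; message sizes hint at photos versus text), which is why regulators treat metadata as personal data in its own right.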

Facebook’s apps are also not open source. So even WhatsApp, which uses the respected Signal Protocol for its e2e encryption, remains under its control — with no ability for external audits to verify exactly what happens to data inside the app (such as checking what data gets sent back to Facebook). Users still have to trust Facebook’s implementation but regulators might demand actual proof of bona fide messaging privacy.

Nonetheless, the push by Facebook to integrate separate messaging products onto a single unified platform could be a defensive strategy — intended to throw dust in the face of antitrust regulators as political scrutiny of its market position and power continues to crank up. Though it would certainly be an aggressive defence to more tightly knit separate platforms together.

But if the risk Facebook is trying to shrink is being forced, by competition regulators, to sell off one or two of its messaging platforms it may feel it has nothing to lose by making it technically harder to break its business apart.

At the time of the acquisitions of Instagram and WhatsApp Facebook promised autonomy to their founders. Zuckerberg has since changed his view, according to the NYT — believing integrating all three will increase the utility of each and thus provide a disincentive for users to abandon each service.

It may also be a hedge against any one of the three messaging platforms decreasing in popularity by furnishing the business with internal levers it can throw to try to artificially juice activity across a less popular app by encouraging cross-platform usage.

And given the staggering size of the Facebook messaging empire, which globally sprawls to 2.5BN+ humans, user resistance to centralized manipulation via having their buttons pushed to increase cross-platform engagement across Facebook’s business may be futile without regulatory intervention.


Source: The Tech Crunch

Read More

Everything you need to know about Facebook, Google’s app scandal

Posted by on Feb 1, 2019 in app-store, Apple, Apple App Store, Apps, Europe, Facebook, Federal Trade Commission, Finance, General Data Protection Regulation, Google, messaging apps, mobile devices, operating systems, Privacy, Security, Smartphones, Social Media, Sonos, United States | 0 comments

Facebook and Google landed in hot water with Apple this week after two investigations by TechCrunch revealed the misuse of internal-only certificates — leading to their revocation, which led to a day of downtime at the two tech giants.

Confused about what happened? Here’s everything you need to know.

How did all this start, and what happened?

On Monday, we revealed that Facebook was misusing an Apple-issued enterprise certificate that is only meant for companies to use to distribute internal, employee-only apps without having to go through the Apple App Store. But the social media giant used that certificate to sign an app that Facebook distributed outside the company, violating Apple’s rules.

The app, known simply as “Research,” allowed Facebook unparalleled access to all of the data flowing out of a device. This included access to some of the users’ most sensitive network data. Facebook paid users — including teenagers — $20 per month to install the app. But it wasn’t clear exactly what kind of data was being vacuumed up, or for what reason.

It turns out that the app was a repackaged app that was effectively banned from Apple’s App Store last year for collecting too much data on users.

Apple was angry that Facebook was misusing its special-issue enterprise certificates to push an app it already banned, and revoked it — rendering the app unable to open. But Facebook was using that same certificate to sign its other employee-only apps, effectively knocking them offline until Apple re-issued the certificate.

Then, it turned out Google was doing almost exactly the same thing with its Screenwise app, and Apple’s ban-hammer fell again.

What’s the controversy over these enterprise certificates and what can they do?

If you want to develop Apple apps, you have to abide by its rules — and Apple expressly makes companies agree to its terms.

A key rule is that Apple doesn’t allow app developers to bypass the App Store, where every app is vetted to ensure it’s as secure as it can be. It does, however, grant exceptions for enterprise developers, such as companies that want to build apps that are used only internally by employees. Facebook and Google in this case signed up to be enterprise developers and agreed to Apple’s developer terms.

Each Apple-issued certificate grants companies permission to distribute apps they develop internally — including pre-release versions of the apps they make, for testing purposes. But these certificates aren’t allowed to be used for apps distributed to ordinary consumers, who have to download apps through the App Store.

What’s a “root” certificate, and why is its access a big deal?

Because Facebook’s Research and Google’s Screenwise apps were distributed outside of Apple’s App Store, users had to manually install them, a process known as sideloading. That requires going through a convoluted few steps: downloading the app itself, then opening and trusting either Facebook or Google’s enterprise developer code-signing certificate, which is what allows the app to run.

Both companies required users, after the app was installed, to agree to an additional configuration step — known as a VPN configuration profile — allowing all of the data flowing out of that user’s phone to funnel down a special tunnel that directs it all to either Facebook or Google, depending on which app was installed.

This is where the Facebook and Google cases differ.

Google’s app collected data and sent it off to Google for research purposes, but couldn’t access encrypted data — such as the content of any network traffic protected by HTTPS, as most App Store apps and websites are.

Facebook, however, went far further. Its users were asked to go through an additional step to trust an additional type of certificate at the “root” level of the phone. Trusting this Facebook Research root certificate authority allowed the social media giant to look at all of the encrypted traffic flowing out of the device — essentially what we call a “man-in-the-middle” attack. That allowed Facebook to sift through your messages, your emails and any other bit of data that leaves your phone. Only apps that use certificate pinning — which rejects any certificate other than the app’s own — were protected, such as iMessage, Signal and other end-to-end encrypted apps.
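Certificate pinning defeats this kind of interception because the app stops trusting the phone’s certificate store altogether. A minimal sketch of the idea (the certificate bytes and pin here are hypothetical placeholders, not any real app’s values):

```python
import hashlib

# Certificate pinning in a nutshell: instead of accepting any certificate
# the OS trust store validates — including a user-installed root CA, as in
# the man-in-the-middle setup described above — the app compares the
# presented certificate's fingerprint against a value baked in at build time.

GENUINE_CERT_DER = b"server-cert-der-bytes"  # hypothetical DER-encoded cert
PINNED_SHA256 = hashlib.sha256(GENUINE_CERT_DER).hexdigest()

def is_pinned_cert(der_cert_bytes: bytes) -> bool:
    """Accept the TLS connection only if the presented certificate's
    SHA-256 fingerprint exactly matches the pinned value."""
    return hashlib.sha256(der_cert_bytes).hexdigest() == PINNED_SHA256

# A MITM proxy must re-sign traffic with its own certificate, so its
# fingerprint can never match the pin and the connection is rejected.
assert is_pinned_cert(GENUINE_CERT_DER)                # genuine cert: accepted
assert not is_pinned_cert(b"mitm-proxy-cert-bytes")    # interception cert: rejected
```

This is why installing Facebook’s root certificate exposed traffic from most apps but not from pinned ones: the pinned apps never consult the device’s (now-tampered) trust store in the first place.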

Facebook’s Research app requires root certificate access, which lets Facebook gather almost any piece of data transmitted by your phone (Image: supplied)

Google’s app might not have been able to look at encrypted traffic, but the company still flouted the rules — and had its separate enterprise developer code-signing certificate revoked anyway.

What data did Facebook have access to on iOS?

It’s hard to know for sure, but it definitely had access to more data than Google.

Facebook said its app was to help it “understand how people use their mobile devices.” In reality, at root traffic level, Facebook could have accessed any kind of data that left your phone.

Will Strafach, a security expert with whom we spoke for our story, said: “If Facebook makes full use of the level of access they are given by asking users to install the certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.”

Remember: this isn’t “root” access to your phone, like jailbreaking, but root access to the network traffic.

How does this compare to the technical ways other market research programs work?

In fairness, market research apps like these aren’t unique to Facebook or Google. Several other companies, like Nielsen and comScore, run similar programs, but neither asks users to install a VPN or provide root access to the network.

In any case, Facebook already has a lot of your data — as does Google. Even if the companies only wanted to look at your data in aggregate with other people, they can still home in on who you talk to, when, for how long and, in some cases, what about. It might not have been such an explosive scandal had Facebook not spent the last year cleaning up after several security and privacy breaches.

Can they capture the data of people the phone owner interacts with?

In both cases, yes. In Google’s case, any unencrypted data that involves another person’s data could have been collected. In Facebook’s case, it goes far further — any data of yours that interacts with another person, such as an email or a message, could have been collected by Facebook’s app.

How many people did this affect?

It’s hard to know for sure. Neither Google nor Facebook has said how many users they have. Between them, it’s believed to be in the thousands. As for the employees affected by the app outages, Facebook has more than 35,000 employees and Google has more than 94,000 employees.

Why did internal apps at Facebook and Google break after Apple revoked the certificates?

You might own your Apple device, but Apple still gets to control what goes on it.

Apple can’t control Facebook’s root certificates, but it can control the enterprise certificates it issues. After Facebook was caught out, Apple said: “Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.” That meant any app that relied on Facebook’s enterprise certificate — including inside the company — would fail to load. That’s not just pre-release builds of Facebook, Instagram and WhatsApp that staff were working on, but reportedly the company’s travel and collaboration apps were down. In Google’s case, even its catering and lunch menu apps were down.

Facebook’s internal apps were down for about a day, while Google’s internal apps were down for a few hours. None of Facebook or Google’s consumer services were affected, however.

How are people viewing Apple in all this?

Nobody seems thrilled with Facebook or Google at the moment, but not many are happy with Apple, either. Even though Apple sells hardware and doesn’t use your data to profile you or serve you ads — like Facebook and Google do — some are uncomfortable with how much power Apple has over the customers — and enterprises — that use its devices.

By revoking Facebook’s and Google’s enterprise certificates, Apple caused downtime with knock-on effects inside both companies.

Is this legal in the U.S.? What about in Europe with GDPR?

Well, it’s not illegal — at least in the U.S. Facebook says it gained consent from its users, and even said its teenage users had to obtain parental consent. But that step was easily skippable, no verification checks were made, and it wasn’t clear that the children who “consented” really understood how much privacy they were handing over.

That could lead to major regulatory headaches down the line. “If it turns out that European teens have been participating in the research effort Facebook could face another barrage of complaints under the bloc’s General Data Protection Regulation (GDPR) — and the prospect of substantial fines if any local agencies determine it failed to live up to consent and ‘privacy by design’ requirements baked into the bloc’s privacy regime,” wrote TechCrunch’s Natasha Lomas.

Who else has been misusing certificates?

Don’t think that Facebook and Google are alone in this. It turns out that a lot of companies might be flouting the rules, too.

According to people digging around on social media, Sonos uses enterprise certificates for its beta program, as does finance app Binance, as well as DoorDash for its fleet of contractors. It’s not known if Apple will also revoke their enterprise certificates.

What next?

It’s anybody’s guess, but don’t expect this situation to die down any time soon.

Facebook may face repercussions in Europe, as well as at home. Two U.S. senators, Mark Warner and Richard Blumenthal, have already called for action, accusing Facebook of “wiretapping teens.” The Federal Trade Commission may also investigate, if Blumenthal gets his way.


Source: The Tech Crunch


Youth-run agency AIESEC exposed over 4 million intern applications

Posted by on Jan 21, 2019 in Christmas, data protection, data security, Elasticsearch, Europe, European Union, General Data Protection Regulation, Security, SMS, Technology, world wide web | 0 comments

AIESEC, a non-profit that bills itself as the “world’s largest youth-run organization,” exposed more than four million intern applications with personal and sensitive information on a server without a password.

Bob Diachenko, an independent security researcher, found an unprotected Elasticsearch database containing the applications on January 11, a little under a month after the database was first exposed.

The database’s “opportunity applications” contained the applicant’s name, gender, date of birth, and the reasons why the person was applying for the internship, according to Diachenko’s blog post on SecurityDiscovery, shared exclusively with TechCrunch. The database also contained the date and time when an application was rejected.
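For context on how such finds happen: an unprotected Elasticsearch cluster answers plain, unauthenticated HTTP requests on port 9200, so anyone who reaches the host can page through its `_search` results. A hedged sketch of parsing such a response follows; the JSON below is fabricated, not AIESEC’s data, and reflects pre-7.0 clusters (current at the time), which report `total` as a plain number:

```python
import json

# Fabricated example of what GET http://<host>:9200/<index>/_search returns
# on an unauthenticated cluster; no real endpoint is contacted here.
sample_response = json.loads("""
{
  "hits": {
    "total": 4238497,
    "hits": [
      {"_source": {"name": "Jane Doe", "dob": "1998-04-02", "status": "rejected"}}
    ]
  }
}
""")

def summarize_exposure(response):
    """Count records and list the personal-data fields present in the sample hits."""
    hits = response["hits"]
    fields = sorted({key for hit in hits["hits"] for key in hit["_source"]})
    return hits["total"], fields

total, fields = summarize_exposure(sample_response)
print(total, fields)  # 4238497 ['dob', 'name', 'status']
```

Securing such a cluster is typically a matter of binding it to a private interface or putting authentication in front of it.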

AIESEC, which has more than 100,000 members in 126 countries, said the database was inadvertently exposed 20 days prior to Diachenko’s notification — just before Christmas — as part of an “infrastructure improvement project.”

The database was secured the same day as Diachenko’s private disclosure.

Laurin Stahl, AIESEC’s global vice president of platforms, confirmed the exposure to TechCrunch but claimed that no more than 40 users were affected.

Stahl said that the agency had “informed the users who would most likely be on the top of frequent search results” in the database — some 40 individuals, he said — after the agency found no large requests of data from unfamiliar IP addresses.

“Given the fact that the security researcher found the cluster, we informed the users who would most likely be on the top of frequent search results on all indices of the cluster,” said Stahl. “The investigation we did over the weekend showed that no more than 50 data records affecting 40 users were available in these results.”

Stahl said the agency informed Dutch data protection authorities three days after the exposure came to light.

“Our platform and entire infrastructure is still hosted in the EU,” he said, despite the organization’s recent relocation of its headquarters to Canada.

Like companies and other organizations, non-profits are not exempt from European rules where EU citizens’ data is collected, and can face fines of up to €20 million or four percent of global annual revenue, whichever is higher, for serious GDPR violations.
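That “whichever is higher” wording means the €20 million floor is what matters for most non-profits, since four percent of a modest turnover falls well below it. As a sketch of the arithmetic (GDPR Article 83(5) is the relevant provision):

```python
def gdpr_max_fine(annual_revenue_eur: float) -> float:
    """Upper bound under GDPR Art. 83(5): EUR 20M or 4% of global
    annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_revenue_eur)

# An organization with EUR 50M revenue: 4% is EUR 2M, so the EUR 20M floor applies.
print(gdpr_max_fine(50_000_000))      # 20000000
# A giant with EUR 40B revenue: 4% is EUR 1.6B, far above the floor.
print(gdpr_max_fine(40_000_000_000))  # 1600000000.0
```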

It’s the latest instance of an Elasticsearch server going unprotected.

A massive database leaking millions of real-time SMS messages was found and secured last year, as were customer records from a popular massage service and the phone contact lists of five million users of an exposed emoji app.



Feds like cryptocurrencies and blockchain tech and so should antitrust agencies

Posted by on Dec 13, 2018 in author, Bitcoin, blockchain, Column, computing, cryptocurrencies, decentralization, digital rights, economy, ethereum, fed, General Data Protection Regulation, Germany, human rights, money, Privacy, St. Louis | 0 comments

While statements and position papers from most central banks were generally skeptical of cryptocurrencies, the times may be changing.

Earlier this year, the Federal Reserve Bank of St. Louis published a study describing the positive effects of cryptocurrencies on privacy protection.

Even with the precipitous decline in the value of Bitcoin, Ethereum and other currencies, the Federal Reserve author emphasized that these currencies create a new competitive offering precisely because of the way they function, and that, accordingly, they are here to stay.

And antitrust authorities should welcome cryptocurrencies and blockchain technologies for the same reason.

Fact: crypto-currencies are good for (legitimate) privacy protection

In the July article from Federal Reserve research fellow Charles M. Kahn, cryptocurrencies were held up as an exemplar of a degree of privacy protection that not even the central banks can provide to customers.

Kahn further stressed that “privacy in payments is desired not just for illegal transactions, but also for protection from malfeasance or negligence by counterparties or by the payments system provider itself.”

The act of payment engages the liability of the person who makes it. As a consequence, parties insert numerous contractual clauses to limit their liability. This creates a real issue, because some “parties to the transaction are no longer able to support the lawyers’ fees necessary to uphold the arrangement.” Smart contracts may address this issue by automating conflict resolution, but for anyone who doesn’t have access to them, crypto-currencies solve the problem differently: they make it possible to transact without revealing your identity.

Above all, crypto-currencies are a reaction to fears of privacy invasion, whether by governments or big companies, according to Kahn. And indeed, following Cambridge Analytica and fake news revelations, we are hearing more and more opinions expressing concerns. The General Data Protection Regulation is set to protect private citizens, but in practice, “more and more individuals will turn to payments technologies for privacy protection in specific transactions.” In this regard, cryptocurrencies provide an alternative solution that competes directly with what the market currently offers.

Consequence: blockchain is good for competition and consumers

Indeed, cryptocurrencies may be only one among many blockchain applications. The diffusion of data among a decentralized network that is independently verified by some or all of the network’s participating stakeholders is precisely the aspect of the technology that provides privacy protection and competes with applications outside the blockchain by offering a different kind of service.

The Fed of St. Louis’ study underlines that “because privacy needs are different in type and degree, we should expect a variety of platforms to emerge for specific purposes, and we should expect continued competition between traditional and start-up providers.”

And who doesn’t love variety? In an era when antitrust authorities are increasingly interested in consumers’ privacy, crypto-currencies (and blockchains more generally) offer far more effective protection than antitrust law and the GDPR combined.

These agencies should be happy about that, but they don’t say a word about it. That silence could lead to flawed judgments, because ignoring the speed of blockchain development, and its increasingly varied uses, leads to misjudging the real nature of the competitive field.

And in fact, because they ignore the existence of blockchain applications, they tend to engage in more and more procedures in which privacy is treated as an antitrust concern (see what’s happening in Germany). But blockchain is actually providing an answer to this issue; it cannot accordingly be said that the market is failing. And without a market failure, antitrust agencies’ intervention is not legitimate.

The roles of the Fed and antitrust agencies could change

This new privacy offering from blockchain technologies should also lead to changes in the role of agencies. As the Fed study stressed:

“the future of central banks and payments authorities is no longer in privacy provision but in privacy regulation, in holding the ring as different payments platforms offer solutions appropriate to different niches with different mixes of expenses and safety, and with attention to different parts of the public’s demand for privacy.”

Some constituencies may criticize the expanding role of central banks in enforcing and ensuring privacy online, but those banks would be even harder pressed if they handled the task themselves instead of trying to relinquish it to the network.

The same applies to antitrust authorities. It is not for them to judge what the business model of digital companies should be or what degree of privacy protection they should offer. Their role is to ensure that alternatives exist; here, that means ensuring blockchain can be deployed without misinformed regulation slowing it down.

Perhaps antitrust agencies should be more vocal about the benefits of cryptocurrencies and blockchain and advise governments not to prevent them.

After all, if even the Fed is now pro-cryptocurrency, antitrust regulators should jump on the bandwagon without fear. Blockchain creates a new alternative by offering real privacy protections, which ultimately put more power in the hands of consumers. If antitrust agencies can’t recognize that, we will soon be asking ourselves: who are they really protecting?



How a small French privacy ruling could remake adtech for good

Posted by on Nov 20, 2018 in Adtech, Advertising Tech, data controller, data protection, digital media, DuckDuckGo, Europe, European Union, Facebook, General Data Protection Regulation, Google, iab europe, Ireland, Lawsuit, online ads, Online Advertising, Open Rights Group, Privacy, programmatic advertising, Real-time bidding, rtb, Security, Social, TC, United Kingdom, web browser | 0 comments

A ruling in late October against a little-known French adtech firm, which popped up on the national data watchdog’s website earlier this month, is causing ripples of excitement among privacy watchers in Europe who believe it signals the beginning of the end for creepy online ads.

The excitement is palpable.

Impressively so, given that the dry CNIL decision against mobile “demand side platform” Vectaury was published only in the regulator’s dense, native French legalese.

Digital advertising trade press AdExchanger picked up on the decision yesterday.

Here’s the killer paragraph from CNIL’s ruling — translated into “rough English” by my TC colleague Romain Dillet:

The requirement based on the article 7 above-mentioned isn’t fulfilled with a contractual clause that guarantees validly collected initial consent. The company VECTAURY should be able to show, for all data that it is processing, the validity of the expressed consent.

In plainer English, this is being interpreted by data experts as the regulator stating that consent to processing personal data cannot be gained through a framework arrangement which bundles a number of uses behind a single “I agree” button that, when clicked, passes consent to partners via a contractual relationship.

CNIL’s decision suggests that bundling consent to partner processing in a contract is not, in and of itself, valid consent under the European Union’s General Data Protection Regulation (GDPR) framework.

Consent under this regime must be specific, informed and freely given. It says as much in the text of GDPR.

But now, on top of that, the CNIL’s ruling suggests a data controller has to be able to demonstrate the validity of the consent — so cannot simply tuck consent inside a contractual “carpet-bag” that gets passed around to everyone else in their chain as soon as the user clicks “I agree.”
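In practice, demonstrating validity implies storing a per-user, per-purpose consent record with evidence of when it was given, rather than a single bundled flag inherited through a contract chain. A hypothetical sketch of the distinction; the field names are illustrative, not drawn from any real consent management platform:

```python
from datetime import datetime, timezone

# Bundled consent: one click, purposes inferred by contract. This is
# what CNIL's ruling suggests a controller cannot rely on.
bundled = {"user_id": "u123", "agreed": True}

# Specific consent: one record per purpose, with evidence of when it
# was given -- something a controller can actually demonstrate.
specific = [
    {"user_id": "u123", "purpose": "ad_personalisation",
     "granted": True, "timestamp": datetime(2018, 11, 2, tzinfo=timezone.utc)},
    {"user_id": "u123", "purpose": "geolocation",
     "granted": False, "timestamp": datetime(2018, 11, 2, tzinfo=timezone.utc)},
]

def can_demonstrate(records, user_id, purpose):
    """True only if an explicit, affirmative record exists for this purpose."""
    return any(r["user_id"] == user_id and r["purpose"] == purpose and r["granted"]
               for r in records)

print(can_demonstrate(specific, "u123", "ad_personalisation"))  # True
print(can_demonstrate(specific, "u123", "geolocation"))         # False
```

Note that the bundled record above can answer neither question: it carries no evidence about any specific purpose.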

This is important, because many widely used digital advertising consent frameworks rolled out to websites in Europe this year — in claimed compliance with GDPR — are using a contractual route to obtain consent, and bundling partner processing behind often hideously labyrinthine consent flows.

The experience for web users in the EU right now is not great. But it could be leading to a much better internet down the road.

Where’s the consent for partner processing?

Even on a surface level the current crop of confusing consent mazes look problematic.

But the CNIL ruling suggests there are deeper and more structural problems lurking and embedded within. And as regulators dig in and start to unpick adtech contradictions it could force a change of mindset across the entire ecosystem.

As ever, when talking about consent and online ads the overarching point to remember is that no consumer given a genuine full disclosure about what’s being done with their personal data in the name of behavioral advertising would freely consent to personal details being hawked and traded across the web just so a bunch of third parties can bag a profit share.

This is why, despite GDPR being in force (since May 25), there are still so many tortuously confusing “consent flows” in play.

The longstanding online T&Cs trick of obfuscating and socially engineering consent remains an unfortunately standard playbook. But, less than six months into GDPR we’re still very much in a “phoney war” phase. More regulatory rulings are needed to lay down the rules by actually enforcing the law.

And CNIL’s recent activity suggests more to come.

In the Vectaury case, the mobile ad firm used a template framework for its consent flow that had been created by industry trade association and standards body, IAB Europe.

It did make some of its own choices, using its own wording on an initial consent screen and pre-ticking the purposes (another big GDPR no-no). But the bundling of data purposes behind a single opt in/out button is the core IAB Europe design. So CNIL’s ruling suggests there could be trouble ahead for other users of the template.

IAB Europe’s CEO, Townsend Feehan, told us it’s working on a statement in reaction to the CNIL decision, but suggested Vectaury fell foul of the regulator because it may not have correctly implemented the “Transparency & Consent Framework-compliant” consent management platform (CMP) framework, as it’s tortuously known.

So either “the ‘CMP’ that they implemented did not align to our Policies, or choices they could have made in the implementation of their CMP that would have facilitated compliance with the GDPR were not made,” she suggested to us via email.

Though that sidesteps the contractual crux point that’s really exciting privacy advocates — and making them point to the CNIL as having slammed the first of many unbolted doors.

The French watchdog has made a handful of other decisions in recent months, also involving geolocation-harvesting adtech firms, and also for processing data without consent.

So regulatory activity on the GDPR+adtech front has been ticking up.

Its decision to publish these rulings suggests it has wider concerns about the scale and privacy risks of current programmatic ad practices in the mobile space than can be attached to any single player.

So the suggestion is that just publishing the rulings looks intended to put the industry on notice…

Meanwhile, adtech giant Google has also made itself unpopular with publisher “partners” over its approach to GDPR by forcing them to collect consent on its behalf. And in May a group of European and international publishers complained that Google was imposing unfair terms on them.

The CNIL decision could sharpen that complaint too — raising questions over whether audits of publishers that Google said it would carry out will be enough for the arrangement to pass regulatory muster.

For a demand-side platform like Vectaury, which was acting on behalf of more than 32,000 partner mobile apps with user eyeballs to trade for ad cash, achieving GDPR compliance would mean either asking users for genuine consent and/or having a very large number of contracts on which it’s doing actual due diligence.

Yet Google is orders of magnitude more massive, of course.

The Vectaury file gives us a fascinating little glimpse into adtech “business as usual.” Business which also wasn’t, in the regulator’s view, legal.

The firm was harvesting a bunch of personal data (including people’s location and device IDs) on its partners’ mobile users via an SDK embedded in their apps, and receiving bids for these users’ eyeballs via another standard piece of the programmatic advertising pipe — ad exchanges and supply side platforms — which also get passed personal data so they can broadcast it widely via the online ad world’s real-time bidding (RTB) system. That’s to solicit potential advertisers’ bids for the attention of the individual app user… The wider the personal data gets spread, the more potential ad bids.

That scale is how programmatic works. It also looks horrible from a GDPR “privacy by design and default” standpoint.

The sprawling process of programmatic explains the very long list of “partners” nested non-transparently behind the average publisher’s online consent flow. The industry, as it is shaped now, literally trades on personal data.

So if the consent rug it’s been squatting on for years suddenly gets ripped out from underneath it, there would need to be radical reshaping of ad-targeting practices to avoid trampling on EU citizens’ fundamental rights.

GDPR’s really big change was supersized fines. So ignoring the law would get very expensive.

Oh hai real-time bidding!

In Vectaury’s case, CNIL discovered the company was holding the personal data of a staggering 67.6 million people when it conducted an on-site inspection of the company in April 2018.

That already sounds like A LOT of data for a small mobile adtech player. Yet it might actually have been a tiny fraction of the personal data the company was routinely handling — given that Vectaury’s own website claims 70 percent of collected data is not stored.

In the decision there was no fine, but CNIL ordered the firm to delete all data it had not already deleted (having judged collection illegal given consent was not valid); and to stop processing data without consent.

But given the personal-data-based hinge of current-gen programmatic adtech, that essentially looks like an order to go out of business. (Or at least out of that business.)

And now we come to another interesting GDPR adtech complaint that’s not yet been ruled on by the two DPAs in question (Ireland and the U.K.) — but which looks even more compelling in light of the CNIL Vectaury decision because it picks at the adtech scab even more daringly.

Filed last month with the Irish Data Protection Commission and the U.K.’s ICO, this adtech complaint — the work of three individuals, Johnny Ryan of private web browser Brave; Jim Killock, exec director of digital and civil rights group, the Open Rights Group; and University College London data protection researcher, Michael Veale — targets the RTB system itself.

Here’s how Ryan, Killock and Veale summarized the complaint when they announced it last month:

Every time a person visits a website and is shown a “behavioural” ad on a website, intimate personal data that describes each visitor, and what they are watching online, is broadcast to tens or hundreds of companies. Advertising technology companies broadcast these data widely in order to solicit potential advertisers’ bids for the attention of the specific individual visiting the website.

A data breach occurs because this broadcast, known as a “bid request” in the online industry, fails to protect these intimate data against unauthorized access. Under the GDPR this is unlawful.

The GDPR, Article 5, paragraph 1, point f, requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss.” If you can not protect data in this way, then the GDPR says you can not process the data.
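The broadcast at issue is the bid request defined by the IAB’s OpenRTB specification. A stripped-down, fabricated sketch of the kinds of fields such a request can carry shows why the complaint calls these data intimate; real requests follow the OpenRTB schema and include far more:

```python
# Simplified, fabricated OpenRTB-style bid request. Values are invented;
# real requests follow the IAB OpenRTB spec and carry many more fields.
bid_request = {
    "id": "req-001",
    "site": {"page": "https://example-news-site.test/article"},
    "device": {
        "ifa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",  # resettable ad ID
        "geo": {"lat": 48.8566, "lon": 2.3522},          # precise location
        "ua": "Mozilla/5.0 (iPhone)",
    },
    "user": {"id": "partner-uid-42"},
}

PERSONAL_FIELDS = [("device", "ifa"), ("device", "geo"), ("user", "id")]

def personal_data_present(req):
    """List which identifying fields this request would broadcast to bidders."""
    return [f"{a}.{b}" for a, b in PERSONAL_FIELDS if req.get(a, {}).get(b)]

print(personal_data_present(bid_request))
# ['device.ifa', 'device.geo', 'user.id']
```

Every one of those fields goes out to tens or hundreds of bidders on every page view, which is the unauthorized-access exposure the complaint describes.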

Ryan tells TechCrunch that the crux of the complaint is not related to the legal basis of the data sharing but rather focuses on the processing itself — arguing “that it itself is not adequately secure… that there aren’t adequate controls.”

Though he says there’s a consent element too, and so sees the CNIL ruling bolstering the RTB complaint. (On that keep in mind that CNIL judged Vectaury should not have been holding the RTB data of 67.6M people because it did not have valid consent.)

“We do pick up on the issue of consent in the complaint. And this particular CNIL decision has a bearing on both of those issues,” he argues. “It demonstrates in a concrete example that involved investigators going into physical premises and checking the machines — it demonstrates that even one small company was receiving tens of millions of people’s personal data in this illegal way.

“So the breach is very real. And it demonstrates that it’s not unreasonable to suggest that the consent is meaningless in any case.”

Reaching for a handy visual explainer, he continues: “If I leave a briefcase full of personal data in the middle of Charing Cross station at 11am and it’s really busy, that’s a breach. That would have been a breach back in the 1970s. If my business model is to drive up to Charing Cross station with a dump-truck and dump briefcases onto the street at 11am in the full knowledge that my business partners will all scramble around and try and grab them — and then to turn up at 11.01am and do the same thing. And then 11.02am. And every microsecond in between. That’s still a fucking data breach!

“It doesn’t matter if you think you’ve consent or anything else. You have to [comply with GDPR Article 5, paragraph 1, point f] in order to even be able to ask for a legal basis. There are plenty of other problems but that’s the biggest one that we highlighted. That’s our reason for saying this is a breach.”

“Now what CNIL has said is this company, Vectaury, was processing personal data that it did not lawfully have — and it got them through RTB,” he adds, spelling the point out. “So back to the GDPR — GDPR is saying you can’t process data in a way that doesn’t ensure protection against unauthorized or unlawful processing.”

In other words, RTB as a funnel for processing personal data looks to be on inherently shaky ground because it’s inherently putting all this personal data out there and at risk…

What’s bad for data brokers…

In another loop back, Ryan says the regulators have been in touch since their RTB complaint was filed to invite them to submit more information.

He says the CNIL Vectaury decision will be incorporated into further submissions, predicting: “This is going to be bounced around multiple regulators.”

The trio is keen to generate extra bounce by working with NGOs to enlist other individuals to file similar complaints in other EU Member States — to make the action a pan-European push, just like programmatic advertising itself.

“We now have the opportunity to connect our complaint with the excellent work that Privacy International has done, showing where these data end up, and with the excellent work that CNIL has done showing exactly how this actually applies. And this decision from CNIL takes, essentially my report that went with our complaint and shows exactly how that applies in the real world,” he continues.

“I was writing in the abstract — CNIL has now made a decision that is very much not in the abstract, it’s in the real world affecting millions of people… This will be a European-wide complaint.”

But what does programmatic advertising that doesn’t entail trading on people’s grubbily obtained personal data actually look like? If there were no personal data in bid requests, Ryan believes quite a few things would happen. The demise of clickbait, for one.

“There would be no way to take your TechCrunch audience and buy it cheaper on some shitty website. There would be no more of that arbitrage stuff. Clickbait would die! All that nasty stuff would go away,” he suggests.

(And, well, full disclosure: We are TechCrunch — so we can confirm that does sound really great to us!)

He also reckons ad values would go up. Which would also be good news for publishers. (“Because the only place you could buy the TechCrunch audience would be on TechCrunch — that’s a really big deal!”)

He even suggests ad fraud might shrink because the incentives would shift. Or at least they could so long as the “worthy” publishers that are able to survive in the new ad world order don’t end up being complicit with bot fraud anyway.

As it stands, publishers are being squeezed between the twin plates of the dominant adtech platforms (Google and Facebook), having to give up the majority of their ad revenue and leaving the media industry with a shrinking slice (which can be as lean as ~30 percent).

That then has a knock-on impact on funding newsrooms and quality journalism. And, well, on the wider web too — given all the weird incentives that operate in today’s big tech social media platform-dominated internet.

Meanwhile, a privacy-sucking programmatic monster is something only shadowy background data brokers could truly love: firms with no meaningful relationship with the people whose data they feed to the beast.

And, well, Google and Facebook.

Ryan’s view is that the reason an adtech duopoly exists boils down to the “audience leakage” being enabled by RTB. Leakage which, in his view, also isn’t compliant with EU privacy laws.

He reckons the fix for this problem is equally simple: Keep doing RTB but without any personal data.

A real-time ad bidding system that’s been stripped of personal data does not mean no targeted ads. It could still support ad targeting based on real-time factors such as an approximate location (say to a city region) and/or generic and aggregated data.

Crucially, it would not use unique identifiers that enable linking ad bids to an individual’s entire digital footprint and bid request history — as is the case now. Which essentially translates into: RIP privacy rights.

Ryan argues that RTB without personal data would still offer plenty of “value” to advertisers — who could still reach people based on general locations and via real-time interests. (It’s a model that sounds much like what privacy search engine DuckDuckGo is doing, and DuckDuckGo has also been growing.)
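Ryan’s proposal can be pictured as a transformation applied to each bid request before broadcast: drop unique identifiers and coarsen location, keeping only contextual signals. An illustrative sketch, not anything defined by the OpenRTB spec:

```python
import copy

def strip_personal_data(req, city_hint=None):
    """Return a copy of a bid request with identifiers removed and geo
    coarsened, keeping only contextual and aggregate targeting signals."""
    clean = copy.deepcopy(req)
    clean.pop("user", None)        # no cross-site user ID
    device = clean.get("device", {})
    device.pop("ifa", None)        # no resettable ad identifier
    if "geo" in device:
        device["geo"] = {"city": city_hint or "unknown"}  # city, not lat/lon
    return clean

# Fabricated input request with invented values.
raw = {
    "id": "req-001",
    "site": {"page": "https://example-news-site.test/article"},
    "device": {"ifa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",
               "geo": {"lat": 48.8566, "lon": 2.3522}},
    "user": {"id": "partner-uid-42"},
}
clean = strip_personal_data(raw, city_hint="Paris")
print(clean["device"])   # {'geo': {'city': 'Paris'}}
print("user" in clean)   # False
```

Contextual fields like the page URL survive, so advertisers can still bid on the audience of a given site; they just can’t follow an individual across sites.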

The really big problem, though, is turning the behavioral ad tanker around. Given that the ecosystem is embedded, even as the duopoly milks it.

That’s also why Ryan is so hopeful now, though, having parsed the CNIL decision.

His reading is that regulators will play a decisive role in pulling the ad industry’s trigger, forcing through much-needed change in its targeting behavior.

“Unless the entire industry moves together, no one can be the first to remove personal data from bid requests but if the regulators step in in a big way… and say you’re all going to go out of business if you keep putting personal data into bid requests then everyone will come together — like the music industry was forced to eventually, under Steve Jobs,” he argues. “Everyone can together decide on a new short term disadvantageous but long term highly advantageous change.”

Of course such a radical reshaping is not going to happen overnight. Regulatory triggers tend to be slow motion unfoldings at the best of times. You also have to factor in the inexorable legal challenges.

But look closely and you’ll see both momentum massing behind privacy — and regulatory writing on the wall.

“Are we going to see programmatic forced to be non-personal and therefore better for every single citizen of the world (except, say, if they work for a data broker),” adds Ryan, posing his own concluding question. “Will that massive change, which will help society and the web… will that change happen before Christmas? No. But it’s worth working on. And it’s going to take some time.

“It could be two years from now that we have the finality. But a finality there will be. Detroit was only able to fight against regulation for so long. It does come.”

Who’d have thought “taking back control” could ever sound so good?

