The blog of DataDiggers

Cookie walls don’t comply with GDPR, says Dutch DPA

Posted on Mar 8, 2019

Cookie walls that demand a website visitor agrees to their Internet browsing being tracked for ad-targeting as the ‘price’ of entry to the site are not compliant with European data protection law, the Dutch data protection agency clarified yesterday.

The DPA said it has received dozens of complaints from Internet users who had had their access to websites blocked after refusing to accept tracking cookies — so it has taken the step of publishing clear guidance on the issue.

It also says it will be stepping up monitoring, adding that it has written to the most complained-about organizations (without naming any names) — instructing them to make changes to ensure they come into compliance with GDPR.

Europe’s General Data Protection Regulation, which came into force last May, tightens the rules around consent as a legal basis for processing personal data — requiring it to be specific, informed and freely given in order for it to be valid under the law.

Of course consent is not the only legal basis for processing personal data but many websites do rely on asking Internet visitors for consent to ad cookies as they arrive.

And the Dutch DPA’s guidance makes it clear Internet visitors must be asked for permission in advance before any tracking software is placed — such as third-party tracking cookies, tracking pixels and browser fingerprinting tech — and that this permission must be freely given. Ergo, a free choice must be offered.

So, in other words, a ‘data for access’ cookie wall isn’t going to cut it. (Or, as the DPA puts it: “Permission is not ‘free’ if someone has no real or free choice. Or if the person cannot refuse giving permission without adverse consequences.”)

“This is not for nothing; website visitors must be able to trust that their personal data are properly protected,” it further writes in a clarification published on its website [translated via Google Translate].

“There is no objection to software for the proper functioning of the website and the general analysis of the visit on that site. More thorough monitoring and analysis of the behavior of website visitors and the sharing of this information with other parties is only allowed with permission. That permission must be completely free,” it adds. 
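To make that distinction concrete, here is a minimal sketch of a consent gate, assuming a hypothetical loadScript helper and placeholder script URLs; it simply refuses to load third-party tracking code until an explicit, refusable opt-in happens:

```typescript
// Sketch of the Dutch DPA's distinction: functional and general-analytics
// scripts load by default; tracking scripts load only after a freely given,
// refusable opt-in. All URLs are placeholders.
let adTrackingConsent = false; // nothing pre-ticked, nothing assumed

function loadScript(src: string): void {
  const el = document.createElement("script");
  el.src = src;
  el.async = true;
  document.head.appendChild(el);
}

// Runs on every page view, with or without consent.
function loadDefaultTags(): void {
  loadScript("/js/site-functional.js");       // proper functioning of the site
  loadScript("/js/first-party-analytics.js"); // general analysis of the visit
}

function loadTrackersIfPermitted(): void {
  if (!adTrackingConsent) return; // no consent, no tracking
  loadScript("https://tracker.example/pixel.js"); // third-party tracking
}

// Called only when the visitor explicitly opts in via the consent UI;
// the site must remain fully usable if they never do.
function onOptInToAdTracking(): void {
  adTrackingConsent = true;
  loadTrackersIfPermitted();
}

loadDefaultTags();
```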

We’ve reached out to the DPA with questions.

In light of this ruling the cookie wall on the Internet Advertising Bureau (IAB)’s European site (screengrabbed below) looks like a textbook example of what not to do — given the online ad industry association is bundling multiple cookie uses (site functional cookies; site analytical cookies; and third party advertising cookies) under a single ‘I agree’ option.

It does not offer visitors any opt-outs at all. (Not even under the ‘More info’ or privacy policy options pictured below).

If the user does not click ‘I agree’ they cannot gain access to the IAB’s website. So there’s no free choice here. It’s agree or leave.

Clicking ‘More info’ brings up additional information about the purposes the IAB uses cookies for — where it states it is not using collected information to create “visitor profiles”.

However it notes it is using Google products, and explains that some of these use cookies that may collect visitors’ information for advertising — thereby bundling ad tracking into the provision of its website ‘service’.

Again the only ‘choice’ offered to site visitors is ‘I agree’ or to leave without gaining access to the website. Which means it’s not a free choice.

The IAB told us no data protection agencies had been in touch regarding its cookie wall.

Asked whether it intends to amend the cookie wall in light of the Dutch DPA’s guidance a spokeswoman said she wasn’t sure what the team planned to do yet — but she claimed GDPR does not “outright prohibit making access to a service conditional upon consent”; pointing also to the (2002) ePrivacy Directive which she claimed applies here, saying it “also includes recital language to the effect of saying that website content can be made conditional upon the well-informed acceptance of cookies”.

So the IAB’s position appears to be that the ePrivacy Directive trumps GDPR on this issue.

Though it’s not clear how they’ve arrived at that conclusion. (The ePrivacy Directive, now more than fifteen years old, is also in the process of being updated — while the flagship GDPR only came into force last year.)

The portion of the ePrivacy Directive that the IAB appears to be referring to is recital 25 — which includes the following line:

Access to specific website content may still be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose.

However “specific website content” is hardly the same thing as full site access — which is what the IAB’s cookie wall entirely blocks.

The “legitimate purpose” point in the recital also provides a second caveat vis-a-vis making access conditional on accepting cookies — and the recital text includes an example of “facilita[ting] the provision of information society services” as such a legitimate purpose.

What are “information society services”? An earlier European directive defines this legal term as services that are “provided at a distance, electronically and at the individual request of a recipient” [emphasis ours] — suggesting it refers to Internet content that the user actually intends to access (i.e. the website itself), rather than ads that track them behind the scenes as they surf.

So, in other words, even per the outdated ePrivacy Directive, a site might be able to require consent for functional cookies from a user to access a portion of the site.

But that’s not the same as saying you can gate off an entire website unless the visitor agrees to their browsing being pervasively tracked by advertisers.

That’s not the kind of ‘service’ website visitors are looking for. 

Add to that, returning to present-day Europe, the fact that the Dutch DPA has now put out very clear guidance demolishing cookie walls.

The only sensible legal interpretation here is that the writing is on the wall for cookie walls.


Source: TechCrunch


What business leaders can learn from Jeff Bezos’ leaked texts

Posted on Feb 17, 2019

The ‘below the belt selfie’ media circus surrounding Jeff Bezos has made encrypted communications top of mind among nervous executive handlers. Their assumption is that a product with serious cryptography like Wickr – where I work – or Signal could have helped Mr. Bezos and Amazon avoid this drama.

It’s a good assumption, but a troubling conclusion.

I worry that moments like these will drag serious cryptography down to the level of the National Enquirer. I’m concerned that this media cycle may lead people to view privacy and cryptography as a safety net for billionaires rather than a transformative solution for data minimization and privacy.

We live in the chapter of computing when data is mostly unprotected because of corporate indifference. The leaders of our new economy – like the vast majority of society – value convenience and short-term gratification over the security and privacy of consumer, employee and corporate data.  

We cannot let this media cycle pass without recognizing that when corporate executives take a laissez-faire approach to digital privacy, their employees and organizations will follow suit.

Two recent examples illustrate the privacy indifference of our leaders…

  • The most powerful executive in the world is either indifferent to, or unaware that, unencrypted online flirtations would be accessed by nation states and competitors.
  • 2016 presidential campaigns were either indifferent to, or unaware that, unencrypted online communications detailing “off-the-record” correspondence with media and payments to adult actor(s) would be accessed by nation states and competitors.

If our leaders do not respect and understand online security and privacy, then their organizations will not make data protection a priority. It’s no surprise that we see a constant stream of large corporations and federal agencies breached by nation states and competitors. Who then can we look to for leadership?

GDPR is an early attempt by regulators to lead. The European Union enacted GDPR to ensure individuals own their data and to enforce penalties on companies that do not protect personal data. It applies to all data processors, but the EU is clearly focused on sending a message to the large US-based data processors – Amazon, Facebook, Google, Microsoft, etc. In January, France’s National Data Protection Commission sent a message by fining Google $57 million for breaching GDPR rules. It was an unprecedented fine that garnered international attention. However, we must remember that in 2018 Google’s revenues were greater than $300 million … per day! GDPR is, at best, an annoying speed-bump in the monetization strategy of large data processors.
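For a sense of scale, here is a back-of-the-envelope calculation using only the figures cited above (a $57 million fine against revenues of more than $300 million per day):

```typescript
// Rough arithmetic on the figures above; both numbers are approximations.
const fineUsd = 57_000_000;          // France's January fine against Google
const dailyRevenueUsd = 300_000_000; // ">$300 million ... per day" in 2018
const annualRevenueUsd = dailyRevenueUsd * 365; // ≈ $109.5 billion

const shareOfYear = fineUsd / annualRevenueUsd;          // ≈ 0.00052
const hoursOfRevenue = (fineUsd / dailyRevenueUsd) * 24; // ≈ 4.6 hours

console.log(`${(shareOfYear * 100).toFixed(3)}% of annual revenue`);
console.log(`${hoursOfRevenue.toFixed(1)} hours of revenue`);
```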

It is through this lens that Senator Ron Wyden’s (Oregon) idealistic call for billions of dollars in corporate fines and jail time for executives who enable privacy breaches can be seen as reasonable. When record financial penalties are inconsequential it is logical to pursue other avenues to protect our data.

Real change will come when our leaders understand that data privacy and security can increase profitability and reliability. For example, the Compliance, Governance and Oversight Council reports that an enterprise will spend as much as $50 million to protect 10 petabytes of data, and that $34.5 million of this is spent on protecting data that should be deleted. Serious efficiencies are waiting to be realized and serious cryptography can help.  

So, thank you Mr. Bezos for igniting corporate interest in secure communications. Let’s hope this news cycle convinces our corporate leaders and elected officials to embrace data privacy, protection and minimization because it is responsible, profitable and efficient. We need leaders and elected officials to set an example and respect their own data and privacy if we are to have any hope of their organizations protecting ours.


Source: TechCrunch


Is Europe closing in on an antitrust fix for surveillance technologists?

Posted on Feb 10, 2019

The German Federal Cartel Office’s decision to order Facebook to change how it processes users’ personal data this week is a sign the antitrust tide could at last be turning against platform power.

One European Commission source we spoke to, who was commenting in a personal capacity, described it as “clearly pioneering” and “a big deal”, even without Facebook being fined a dime.

The FCO’s decision instead bans the social network from linking user data across the different platforms it owns unless it gains people’s consent (nor can it make the use of its services contingent on such consent). Facebook is also prohibited from gathering and linking data on users from third party websites, such as via its tracking pixels and social plugins.

The order is not yet in force, and Facebook is appealing, but should it come into force the social network faces being de facto shrunk by having its platforms siloed at the data level.

To comply with the order Facebook would have to ask users to freely consent to being data-mined — which the company does not do at present.

Yes, Facebook could still manipulate the outcome it wants from users but doing so would open it to further challenge under EU data protection law, as its current approach to consent is already being challenged.

The EU’s updated privacy framework, GDPR, requires consent to be specific, informed and freely given. That standard supports challenges to Facebook’s (still fixed) entry ‘price’ to its social services. To play you still have to agree to hand over your personal data so it can sell your attention to advertisers. But legal experts contend that’s neither privacy by design nor default.

The only ‘alternative’ Facebook offers is to tell users they can delete their account. Not that doing so would stop the company from tracking you around the rest of the mainstream web anyway. Facebook’s tracking infrastructure is also embedded across the wider Internet so it profiles non-users too.

EU data protection regulators are still investigating a very large number of consent-related GDPR complaints.

But the German FCO, which said it liaised with privacy authorities during its investigation of Facebook’s data-gathering, has dubbed this type of behavior “exploitative abuse”, having also deemed the social service to hold a monopoly position in the German market.

So there are now two lines of legal attack — antitrust and privacy law — threatening Facebook (and indeed other adtech companies’) surveillance-based business model across Europe.

A year ago the German antitrust authority also announced a probe of the online advertising sector, responding to concerns about a lack of transparency in the market. Its work here is by no means done.

Data limits

The lack of a big flashy fine attached to the German FCO’s order against Facebook makes this week’s story less of a major headline than recent European Commission antitrust fines handed to Google — such as the record-breaking $5BN penalty issued last summer for anticompetitive behaviour linked to the Android mobile platform.

But the decision is arguably just as, if not more, significant, because of the structural remedies being ordered upon Facebook. These remedies have been likened to an internal break-up of the company — with enforced internal separation of its multiple platform products at the data level.

This of course runs counter to (ad) platform giants’ preferred trajectory, which has long been to tear modesty walls down; pool user data from multiple internal (and indeed external sources), in defiance of the notion of informed consent; and mine all that personal (and sensitive) stuff to build identity-linked profiles to train algorithms that predict (and, some contend, manipulate) individual behavior.

Because if you can predict what a person is going to do you can choose which advert to serve to increase the chance they’ll click. (Or as Mark Zuckerberg puts it: ‘Senator, we run ads.’)

This means that a regulatory intervention that interferes with an ad tech giant’s ability to pool and process personal data starts to look really interesting. Because a Facebook that can’t join data dots across its sprawling social empire — or indeed across the mainstream web — wouldn’t be such a massive giant in terms of data insights. And nor, therefore, surveillance oversight.

Each of its platforms would be forced to be a more discrete (and, well, discreet) kind of business.

Competing against data-siloed platforms with a common owner — instead of a single interlinked mega-surveillance-network — also starts to sound almost possible. It suggests a playing field that’s reset, if not entirely levelled.

(Whereas, in the case of Android, the European Commission did not order any specific remedies — allowing Google to come up with ‘fixes’ itself; and so to shape the most self-serving ‘fix’ it can think of.)

Meanwhile, just look at where Facebook is now aiming to get to: A technical unification of the backend of its different social products.

Such a merger would collapse even more walls and fully enmesh platforms that started life as entirely separate products before they were folded into Facebook’s empire (also, let’s not forget, via surveillance-informed acquisitions).

Facebook’s plan to unify its products on a single backend platform looks very much like an attempt to throw up technical barriers to antitrust hammers. It’s at least harder to imagine breaking up a company if its multiple, separate products are merged onto one unified backend which functions to cross and combine data streams.

Set against Facebook’s sudden desire to technically unify its full-flush of dominant social networks (Facebook Messenger; Instagram; WhatsApp) is a rising drum-beat of calls for competition-based scrutiny of tech giants.

This has been building for years, as the market power — and even democracy-denting potential — of surveillance capitalism’s data giants has telescoped into view.

Calls to break up tech giants no longer carry a suggestive punch. Regulators are routinely asked whether it’s time. As the European Commission’s competition chief, Margrethe Vestager, was when she handed down Google’s latest massive antitrust fine last summer.

Her response then was that she wasn’t sure breaking Google up is the right answer — preferring to try remedies that might allow competitors to have a go, while also emphasizing the importance of legislating to ensure “transparency and fairness in the business to platform relationship”.

But it’s interesting that the idea of breaking up tech giants now plays so well as political theatre, suggesting that wildly successful consumer technology companies — which have long dined out on shiny convenience-based marketing claims, made ever so saccharine sweet via the lure of ‘free’ services — have lost a big chunk of their populist pull, dogged as they have been by so many scandals.

From terrorist content and hate speech, to election interference, child exploitation, bullying, abuse. There’s also the matter of how they arrange their tax affairs.

The public perception of tech giants has matured as the ‘costs’ of their ‘free’ services have scaled into view. The upstarts have also become the establishment. People see not a new generation of ‘cuddly capitalists’ but another bunch of multinationals; highly polished but remote money-making machines that take rather more than they give back to the societies they feed off.

Google’s trick of naming each Android iteration after a different sweet treat makes for an interesting parallel to the (also now shifting) public perceptions around sugar, following closer attention to health concerns. What does its sickly sweetness mask? And after the sugar tax, we now have politicians calling for a social media levy.

Just this week the deputy leader of the main opposition party in the UK called for setting up a standalone Internet regulator with the power to break up tech monopolies.

Talking about breaking up well-oiled, wealth-concentration machines is being seen as a populist vote winner. And companies that political leaders used to flatter and seek out for PR opportunities find themselves treated as political punchbags; called to attend awkward grillings by hard-grafting committees, or taken to vicious task verbally at the highest profile public podia. (Though some non-democratic heads of state are still keen to press tech giant flesh.)

In Europe, Facebook’s repeat snubs of the UK parliament’s requests last year for Zuckerberg to face policymakers’ questions certainly did not go unnoticed.

Zuckerberg’s empty chair at the DCMS committee has become both a symbol of the company’s failure to accept wider societal responsibility for its products, and an indication of market failure; the CEO so powerful he doesn’t feel answerable to anyone; neither his most vulnerable users nor their elected representatives. Hence UK politicians on both sides of the aisle making political capital by talking about cutting tech giants down to size.

The political fallout from the Cambridge Analytica scandal looks far from done.

Quite how a UK regulator could successfully swing a regulatory hammer to break up a global Internet giant such as Facebook, which is headquartered in the U.S., is another matter. But policymakers have already crossed the Rubicon of public opinion and are relishing talking up having a go.

That represents a sea-change vs the neoliberal consensus that allowed competition regulators to sit on their hands for more than a decade as technology upstarts quietly hoovered up people’s data and bagged rivals, and basically went about transforming themselves from highly scalable startups into market-distorting giants with Internet-scale data-nets to snag users and buy or block competing ideas.

The political spirit looks willing to go there, and now the mechanism for breaking platforms’ distorting hold on markets may also be shaping up.

The traditional antitrust remedy of breaking a company along its business lines still looks unwieldy when faced with the blistering pace of digital technology. The problem is delivering such a fix fast enough that the business hasn’t already reconfigured to route around the reset. 

Commission antitrust decisions on the tech beat have stepped up impressively in pace on Vestager’s watch. Yet it still feels like watching paper pushers wading through treacle to try and catch a sprinter. (And Europe hasn’t gone so far as trying to impose a platform break up.) 

But the German FCO decision against Facebook hints at an alternative way forward for regulating the dominance of digital monopolies: Structural remedies that focus on controlling access to data which can be relatively swiftly configured and applied.

Vestager, whose term as EC competition chief may be coming to its end this year (even if other Commission roles remain in potential and tantalizing contention), has championed this idea herself.

In an interview on BBC Radio 4’s Today program in December she poured cold water on the stock question about breaking tech giants up — saying instead the Commission could look at how larger firms got access to data and resources as a means of limiting their power. Which is exactly what the German FCO has done in its order to Facebook. 

At the same time, Europe’s updated data protection framework has gained the most attention for the size of the financial penalties that can be issued for major compliance breaches. But the regulation also gives data watchdogs the power to limit or ban processing. And that power could similarly be used to reshape a rights-eroding business model or snuff out such business entirely.

The merging of privacy and antitrust concerns is really just a reflection of the complexity of the challenge regulators now face trying to rein in digital monopolies. But they’re tooling up to meet that challenge.

Speaking in an interview with TechCrunch last fall, Europe’s data protection supervisor, Giovanni Buttarelli, told us the bloc’s privacy regulators are moving towards more joint working with antitrust agencies to respond to platform power. “Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” he said. “But first joint enforcement and better co-operation is key.”

The German FCO’s decision represents tangible evidence of the kind of regulatory co-operation that could — finally — crack down on tech giants.

Blogging in support of the decision this week, Buttarelli asserted: “It is not necessary for competition authorities to enforce other areas of law; rather they need simply to identify where the most powerful undertakings are setting a bad example and damaging the interests of consumers. Data protection authorities are able to assist in this assessment.”

He also had a prediction of his own for surveillance technologists, warning: “This case is the tip of the iceberg — all companies in the digital information ecosystem that rely on tracking, profiling and targeting should be on notice.”

So perhaps, at long last, the regulators have figured out how to move fast and break things.


Source: TechCrunch


Facebook warned over privacy risks of merging messaging platforms

Posted on Feb 2, 2019

Facebook’s lead data protection regulator in Europe has asked the company for an “urgent briefing” regarding plans to integrate the underlying infrastructure of its three social messaging platforms.

In a statement posted to its website late last week the Irish Data Protection Commission writes: “Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal.”

Last week the New York Times broke the news that Facebook intends to unify the backend infrastructure of its three separate products, couching it as Facebook founder Mark Zuckerberg asserting control over acquisitions whose founders have since left the building.

Instagram founders, Kevin Systrom and Mike Krieger, left Facebook last year, as a result of rising tensions over reduced independence, according to our sources.

WhatsApp’s founders left Facebook earlier still: Brian Acton departed in late 2017, while Jan Koum stuck it out until spring 2018. The pair reportedly clashed with Facebook execs over user privacy and over how to monetize the end-to-end encrypted platform.

Acton later said Facebook had coached him to tell European regulators assessing whether to approve the 2014 merger that it would be “really difficult” for the company to combine WhatsApp and Facebook user data.

In the event, Facebook went on to link accounts across the two platforms just two years after the acquisition closed. It was later hit with a $122M penalty from the European Commission for providing “incorrect or misleading” information at the time of the merger. Though Facebook claimed it had made unintentional “errors” in the 2014 filing.

A further couple of years on and Facebook has now graduated to seeking full platform unification of separate messaging products.

“We want to build the best messaging experiences we can; and people want messaging to be fast, simple, reliable and private,” a spokesperson told us when we asked for a response to the NYT report. “We’re working on making more of our messaging products end-to-end encrypted and considering ways to make it easier to reach friends and family across networks.”

“As you would expect, there is a lot of discussion and debate as we begin the long process of figuring out all the details of how this will work,” the spokesperson added, confirming the substance of the NYT report.

There certainly would be a lot of detail to be worked out. Not least the feasibility of legally merging user data across distinct products in Europe, where a controversial 2016 privacy u-turn by WhatsApp — when it suddenly announced it would after all share user data with parent company Facebook (despite previously saying it would never do so), including sharing data for marketing purposes — triggered swift regulatory intervention.

Facebook was forced to suspend marketing-related data flows in Europe. Though it has continued sharing data between WhatsApp and Facebook for security and business intelligence purposes, leading the French data watchdog to issue a formal notice at the end of 2017 warning that the latter transfers also lack a legal basis.

A court in Hamburg, Germany, also officially banned Facebook from using WhatsApp user data for its own purposes.

Early last year, following an investigation into the data-sharing u-turn, the UK’s data watchdog obtained an undertaking from WhatsApp that it would not share personal data with Facebook until the two services could do so in a way that’s compliant with the region’s strict privacy framework, the General Data Protection Regulation (GDPR).

Facebook only avoided a fine from the UK regulator because it froze data flows after the regulatory intervention. But the company clearly remains on watch — and any fresh moves to further integrate the platforms would trigger instant scrutiny, evidenced by the shot across the bows from the DPC in Ireland (Facebook’s international HQ is based in the country).

The 2016 WhatsApp-Facebook privacy u-turn also occurred prior to Europe’s GDPR coming into force. And the updated privacy framework includes a regime of substantially larger maximum fines for any violations.

Under the regulation watchdogs also have the power to ban companies from processing data. Which, in the case of a revenue-rich data-mining giant like Facebook, could be a far more potent disincentive than even a billion dollar fine.

We’ve reached out to Facebook for comment on the Irish DPC’s statement and will update this report with any response.

Here’s the full statement from the Irish watchdog:

While we understand that Facebook’s proposal to integrate the Facebook, WhatsApp and Instagram platforms is at a very early conceptual stage of development, the Irish DPC has asked Facebook Ireland for an urgent briefing on what is being proposed. The Irish DPC will be very closely scrutinising Facebook’s plans as they develop, particularly insofar as they involve the sharing and merging of personal data between different Facebook companies. Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal. It must be emphasised that ultimately the proposed integration can only occur in the EU if it is capable of meeting all of the requirements of the GDPR.

Facebook may be hoping that extending end-to-end encryption to Instagram as part of its planned integration effort, per the NYT report, could offer a technical route to stop any privacy regulators’ hammers from falling.

Though use of e2e encryption still does not shield metadata from being harvested. And metadata offers a rich source of inferences about individuals which, under EU law, would certainly constitute personal data. So even with robust encryption across the board of Instagram, Facebook and WhatsApp the unified messaging platforms could still collectively leak plenty of personal data to their data-mining parent.
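To illustrate the point, here is a hypothetical envelope for an e2e-encrypted message (field names invented for this sketch, not any actual Facebook or WhatsApp schema); only the ciphertext is actually hidden:

```typescript
// Hypothetical wire format: the payload is end-to-end encrypted, but the
// surrounding metadata remains visible to whoever operates the service.
interface MessageEnvelope {
  senderId: string;       // who is talking...
  recipientId: string;    // ...to whom: enough to map a social graph
  sentAt: number;         // when (Unix ms): frequency and rhythm of contact
  payloadBytes: number;   // approximate message size
  clientPlatform: string; // e.g. "android" or "ios"
  ciphertext: Uint8Array; // the only field e2e encryption actually protects
}

const example: MessageEnvelope = {
  senderId: "user-123",
  recipientId: "user-456",
  sentAt: Date.now(),
  payloadBytes: 2048,
  clientPlatform: "android",
  ciphertext: new Uint8Array(2048), // unreadable, but everything above is not
};
```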

Facebook’s apps are also not open source. So even WhatsApp, which uses the respected Signal Protocol for its e2e encryption, remains under its control — with no ability for external audits to verify exactly what happens to data inside the app (such as checking what data gets sent back to Facebook). Users still have to trust Facebook’s implementation but regulators might demand actual proof of bona fide messaging privacy.

Nonetheless, the push by Facebook to integrate separate messaging products onto a single unified platform could be a defensive strategy — intended to throw dust in the face of antitrust regulators as political scrutiny of its market position and power continues to crank up. Though knitting separate platforms more tightly together would certainly be an aggressive defence.

But if the risk Facebook is trying to shrink is being forced, by competition regulators, to sell off one or two of its messaging platforms it may feel it has nothing to lose by making it technically harder to break its business apart.

At the time of the acquisitions of Instagram and WhatsApp Facebook promised autonomy to their founders. Zuckerberg has since changed his view, according to the NYT — believing integrating all three will increase the utility of each and thus provide a disincentive for users to abandon each service.

It may also be a hedge against any one of the three messaging platforms decreasing in popularity, by furnishing the business with internal levers it can pull to try to artificially juice activity on a less popular app by encouraging cross-platform usage.

And given the staggering size of the Facebook messaging empire, which globally sprawls to more than 2.5BN humans, user resistance to centralized manipulation (having their buttons pushed to increase cross-platform engagement across Facebook’s business) may be futile without regulatory intervention.


Source: TechCrunch


Youth-run agency AIESEC exposed over 4 million intern applications

Posted on Jan 21, 2019

AIESEC, a non-profit that bills itself as the “world’s largest youth-run organization,” exposed more than four million intern applications with personal and sensitive information on a server without a password.

Bob Diachenko, an independent security researcher, found an unprotected Elasticsearch database containing the applications on January 11, a little under a month after the database was first exposed.

The exposed “opportunity applications” contained the applicant’s name, gender, date of birth and the reasons why the person was applying for the internship, according to Diachenko’s blog post on SecurityDiscovery, shared exclusively with TechCrunch. The database also contained the date and time when an application was rejected.
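For context, this is why an Elasticsearch server with no password is trivially readable: its REST API answers anyone who can reach it. A sketch, with a placeholder host and index name rather than AIESEC’s actual setup:

```typescript
// An unauthenticated Elasticsearch node answers its standard REST API to
// anyone who can reach port 9200. Host and index name are placeholders.
const host = "http://exposed-server.example:9200";

async function peek(): Promise<void> {
  // Inventory every index and its document count in one request.
  const indices = await fetch(`${host}/_cat/indices?v`).then(r => r.text());
  console.log(indices);

  // Pull back sample documents from an index, no credentials required.
  const res = await fetch(`${host}/applications/_search?size=3`);
  const sample = await res.json();
  console.log(JSON.stringify(sample.hits, null, 2));
}

peek().catch(console.error);
```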

AIESEC, which has more than 100,000 members in 126 countries, said the database was inadvertently exposed 20 days prior to Diachenko’s notification — just before Christmas — as part of an “infrastructure improvement project.”

The database was secured the same day as Diachenko’s private disclosure.

Laurin Stahl, AIESEC’s global vice president of platforms, confirmed the exposure to TechCrunch but claimed that no more than 40 users were affected.

Stahl said that the agency had “informed the users who would most likely be on the top of frequent search results” in the database — some 40 individuals, he said — after the agency found no large requests of data from unfamiliar IP addresses.

“Given the fact that the security researcher found the cluster, we informed the users who would most likely be on the top of frequent search results on all indices of the cluster,” said Stahl. “The investigation we did over the weekend showed that no more than 50 data records affecting 40 users were available in these results.”

Stahl said that the agency informed Dutch data protection authorities of the breach three days after the exposure.

“Our platform and entire infrastructure is still hosted in the EU,” he said, despite the organization’s recent relocation of its headquarters to Canada.

Like companies and other organizations, non-profits are not exempt from European rules where EU citizens’ data is collected, and can face fines of up to €20 million or four percent of their global annual revenue — whichever is higher — for serious GDPR violations.
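Expressed as arithmetic (a sketch of the higher-tier penalty cap described above, not legal advice):

```typescript
// The cap described above: the greater of €20 million or 4% of global
// annual revenue for the most serious GDPR violations.
function maxGdprFineEur(globalAnnualRevenueEur: number): number {
  return Math.max(20_000_000, 0.04 * globalAnnualRevenueEur);
}

console.log(maxGdprFineEur(5_000_000));       // small non-profit: the €20M floor applies
console.log(maxGdprFineEur(100_000_000_000)); // tech giant: the 4% branch, i.e. €4B
```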

It’s the latest example of an Elasticsearch instance going unprotected.

Recent examples include a massive database leaking millions of real-time SMS text messages, which was found and secured last year, as well as exposed data from a popular massage service and phone contact lists covering five million users of an emoji app.


Source: TechCrunch


UK parliament seizes cache of internal Facebook documents to further privacy probe

Posted on Nov 25, 2018

Facebook founder Mark Zuckerberg may yet regret underestimating a UK parliamentary committee that’s been investigating the democracy-denting impact of online disinformation for the best part of this year — and whose repeat requests for facetime he’s just as repeatedly snubbed.

In the latest high gear change, reported in yesterday’s Observer, the committee has used parliamentary powers to seize a cache of documents pertaining to a US lawsuit to further its attempt to hold Facebook to account for misuse of user data.

Facebook’s oversight — or rather lack of it — where user data is concerned has been a major focus for the committee, as its enquiry into disinformation and data misuse has unfolded and scaled over the course of this year, ballooning in scope and visibility since the Cambridge Analytica story blew up into a global scandal this April.

The internal documents now in the committee’s possession are alleged to contain significant revelations about decisions made by Facebook senior management vis-a-vis data and privacy controls — including confidential emails between senior executives and correspondence with Zuckerberg himself.

This has been a key line of enquiry for parliamentarians. And an equally frustrating one — with committee members accusing Facebook of being deliberately misleading and concealing key details from it.

The seized files pertain to a US lawsuit that predates mainstream publicity around political misuse of Facebook data, with the suit filed in 2015 by a US startup called Six4Three, after Facebook removed developer access to friend data. (As we’ve previously reported, Facebook was actually being warned about data risks related to its app permissions as far back as 2011 — yet it didn’t fully shut down the friends data API until May 2015.)

The core complaint is an allegation that Facebook enticed developers to create apps for its platform by implying they would get long-term access to user data in return. So by later cutting data access the claim is that Facebook was effectively defrauding developers.

Since lodging the complaint, the plaintiffs have seized on the Cambridge Analytica saga to try to bolster their case.

And in a legal motion filed in May Six4Three’s lawyers claimed evidence they had uncovered demonstrated that “the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones”.

The startup used legal powers to obtain the cache of documents — which remain under seal on order of a California court. But the UK parliament used its own powers to swoop in and seize the files from the founder of Six4Three during a business trip to London when he came under the jurisdiction of UK law, compelling him to hand them over.

According to the Observer, parliament sent a serjeant at arms to the founder’s hotel — giving him a final warning and a two-hour deadline to comply with its order.

“When the software firm founder failed to do so, it’s understood he was escorted to parliament. He was told he risked fines and even imprisonment if he didn’t hand over the documents,” it adds, apparently revealing how Facebook lost control over some more data (albeit, its own this time).

In comments to the newspaper yesterday, DCMS committee chair Damian Collins said: “We are in uncharted territory. This is an unprecedented move but it’s an unprecedented situation. We’ve failed to get answers from Facebook and we believe the documents contain information of very high public interest.”

Collins later tweeted the Observer’s report on the seizure, teasing “more next week” — likely a reference to the grand committee hearing in parliament already scheduled for November 27.

But it could also be a hint the committee intends to reveal and/or make use of information locked up in the documents, as it puts questions to Facebook’s VP of policy solutions…

That said, the documents are subject to the Californian superior court’s seal order, so — as the Observer points out — cannot be shared or made public without risk of being found in contempt of court.

A spokesperson for Facebook made the same point, telling the newspaper: “The materials obtained by the DCMS committee are subject to a protective order of the San Mateo Superior Court restricting their disclosure. We have asked the DCMS committee to refrain from reviewing them and to return them to counsel or to Facebook. We have no further comment.”

Facebook’s spokesperson added that Six4Three’s “claims have no merit”, further asserting: “We will continue to defend ourselves vigorously.”

Earlier on Sunday, Facebook sent a response to Collins, which Guardian reporter Carole Cadwalladr posted soon after.

With the response, Facebook seems to be using the same tactics which were responsible for the latest round of criticism against the company — deny, delay, and dissemble. 

And, well, the irony of Facebook asking for its data to remain private also shouldn’t be lost on anyone at this point…

Another irony: In July, the Guardian reported that as part of Facebook’s defence against Six4Three’s suit the company had argued in court that it is a publisher — seeking to have what it couched as ‘editorial decisions’ about data access protected by the US’ first amendment.

Which is — to put it mildly — quite the contradiction, given Facebook’s long-standing public characterization of its business as just a distribution platform, never a media company.

So expect plenty of fireworks at next week’s public hearing as parliamentarians once again question Facebook over its various contradictory claims.

It’s also possible the committee will have been sent an internal email distribution list by then, detailing who at Facebook knew about the Cambridge Analytica breach in the earliest instance.

This list was obtained by the UK’s data watchdog, over the course of its own investigation into the data misuse saga. And earlier this month information commissioner Elizabeth Denham confirmed the ICO has the list and said it would pass it to the committee.

The accountability net does look to be closing in on Facebook management.

Even as Facebook continues to deny international parliaments any face-time with its founder and CEO (the EU parliament remains the sole exception).

Last week the company refused to even have Zuckerberg do a video call to take the committee’s questions — offering its VP of policy solutions, Richard Allan, to go before what’s now a grand committee comprised of representatives from seven international parliaments instead.

The grand committee hearing will take place in London on Tuesday morning, British time — followed by a press conference in which parliamentarians representing Facebook users from across the world will sign a set of ‘International Principles for the Law Governing the Internet’, making “a declaration on future action”.

So it’s also ‘watch this space’ where international social media regulation is concerned.

As noted above, Allan is just the latest stand-in for Zuckerberg. Back in April the DCMS committee spent the best part of five hours trying to extract answers from Facebook CTO Mike Schroepfer.

“You are doing your best but the buck doesn’t stop with you does it? Where does the buck stop?” one committee member asked him then.

“It stops with Mark,” replied Schroepfer.

But Zuckerberg definitely won’t be stopping by on Tuesday.


Source: TechCrunch


How a small French privacy ruling could remake adtech for good

Posted on Nov 20, 2018

A ruling in late October against a little-known French adtech firm that popped up on the national data watchdog’s website earlier this month is causing ripples of excitement to run through privacy watchers in Europe who believe it signals the beginning of the end for creepy online ads.

The excitement is palpable.

Impressively so, given the dry CNIL decision against mobile “demand side platform” Vectaury was only published in the regulator’s native dense French legalese.

Digital advertising trade press AdExchanger picked up on the decision yesterday.

Here’s the killer paragraph from CNIL’s ruling — translated into “rough English” by my TC colleague Romain Dillet:

The requirement based on the article 7 above-mentioned isn’t fulfilled with a contractual clause that guarantees validly collected initial consent. The company VECTAURY should be able to show, for all data that it is processing, the validity of the expressed consent.

In plainer English, this is being interpreted by data experts as the regulator stating that consent to processing personal data cannot be gained through a framework arrangement which bundles a number of uses behind a single “I agree” button that, when clicked, passes consent to partners via a contractual relationship.

CNIL’s decision suggests that bundling consent to partner processing in a contract is not, in and of itself, valid consent under the European Union’s General Data Protection Regulation (GDPR) framework.

Consent under this regime must be specific, informed and freely given. It says as much in the text of GDPR.

But now, on top of that, the CNIL’s ruling suggests a data controller has to be able to demonstrate the validity of the consent — so cannot simply tuck consent inside a contractual “carpet-bag” that gets passed around to everyone else in their chain as soon as the user clicks “I agree.”

This is important, because many widely used digital advertising consent frameworks rolled out to websites in Europe this year — in claimed compliance with GDPR — are using a contractual route to obtain consent, and bundling partner processing behind often hideously labyrinthine consent flows.
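By contrast, a consent flow that could satisfy the CNIL’s demonstrability requirement might keep per-purpose, per-partner evidence of each choice. A sketch with hypothetical field names:

```typescript
// Hypothetical consent ledger entry: one record per user, per purpose,
// capturing what the user actually saw and chose, so a controller can
// produce proof on demand instead of pointing at a contract clause.
interface ConsentRecord {
  userId: string;
  purpose: "personalized_ads" | "precise_geolocation" | "analytics";
  granted: boolean;            // explicit choice, never pre-ticked
  recordedAt: string;          // ISO timestamp of the interaction
  uiVersion: string;           // which consent screen and wording was shown
  partnersDisclosed: string[]; // vendors actually named to the user
}

function canProcess(
  records: ConsentRecord[],
  purpose: ConsentRecord["purpose"],
  partner: string,
): boolean {
  // Valid only if this purpose was granted on a screen that disclosed
  // this specific partner; a downstream contract alone proves nothing.
  return records.some(
    r => r.purpose === purpose && r.granted && r.partnersDisclosed.includes(partner),
  );
}
```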

The experience for web users in the EU right now is not great. But it could be leading to a much better internet down the road.

Where’s the consent for partner processing?

Even on a surface level the current crop of confusing consent mazes look problematic.

But the CNIL ruling suggests there are deeper and more structural problems lurking and embedded within. And as regulators dig in and start to unpick adtech contradictions it could force a change of mindset across the entire ecosystem.

As ever, when talking about consent and online ads the overarching point to remember is that no consumer given a genuine full disclosure about what’s being done with their personal data in the name of behavioral advertising would freely consent to personal details being hawked and traded across the web just so a bunch of third parties can bag a profit share.

This is why, despite GDPR being in force (since May 25), there are still so many tortuously confusing “consent flows” in play.

The longstanding online T&Cs trick of obfuscating and socially engineering consent remains an unfortunately standard playbook. But, less than six months into GDPR we’re still very much in a “phoney war” phase. More regulatory rulings are needed to lay down the rules by actually enforcing the law.

And CNIL’s recent activity suggests more to come.

In the Vectaury case, the mobile ad firm used a template framework for its consent flow that had been created by industry trade association and standards body, IAB Europe.

It did make some of its own choices, using its own wording on an initial consent screen and pre-ticking the purposes (another big GDPR no-no). But the bundling of data purposes behind a single opt in/out button is the core IAB Europe design. So CNIL’s ruling suggests there could be trouble ahead for other users of the template.

IAB Europe’s CEO, Townsend Feehan, told us it’s working on a statement in reaction to the CNIL decision, but suggested Vectaury fell foul of the regulator because it may not have correctly implemented the “Transparency & Consent Framework-compliant” consent management platform (CMP) — as it’s tortuously known.

So either “the ‘CMP’ that they implemented did not align to our Policies, or choices they could have made in the implementation of their CMP that would have facilitated compliance with the GDPR were not made,” she suggested to us via email.

Though that sidesteps the contractual crux point that’s really exciting privacy advocates — and making them point to the CNIL as having slammed the first of many unbolted doors.

The French watchdog has made a handful of other decisions in recent months, also involving geolocation-harvesting adtech firms, and also for processing data without consent.

So regulatory activity on the GDPR+adtech front has been ticking up.

Its decision to publish these rulings suggests it has wider concerns about the scale and privacy risks of current programmatic ad practices in the mobile space than can be attached to any single player.

So simply publishing the rulings looks intended to put the industry on notice…

Meanwhile, adtech giant Google has also made itself unpopular with publisher “partners” over its approach to GDPR by forcing them to collect consent on its behalf. And in May a group of European and international publishers complained that Google was imposing unfair terms on them.

The CNIL decision could sharpen that complaint too — raising questions over whether audits of publishers that Google said it would carry out will be enough for the arrangement to pass regulatory muster.

For a demand-side platform like Vectaury, which was acting on behalf of more than 32,000 partner mobile apps with user eyeballs to trade for ad cash, achieving GDPR compliance would mean either asking users for genuine consent and/or having a very large number of contracts on which it’s doing actual due diligence.

Yet Google is orders of magnitude more massive, of course.

The Vectaury file gives us a fascinating little glimpse into adtech “business as usual.” Business which also wasn’t, in the regulator’s view, legal.

The firm was harvesting a bunch of personal data (including people’s location and device IDs) on its partners’ mobile users via an SDK embedded in their apps, and receiving bids for these users’ eyeballs via another standard piece of the programmatic advertising pipe — ad exchanges and supply side platforms — which also get passed personal data so they can broadcast it widely via the online ad world’s real-time bidding (RTB) system. That’s to solicit potential advertisers’ bids for the attention of the individual app user… The wider the personal data gets spread, the more potential ad bids.

That scale is how programmatic works. It also looks horrible from a GDPR “privacy by design and default” standpoint.

The sprawling process of programmatic explains the very long list of “partners” nested non-transparently behind the average publisher’s online consent flow. The industry, as it is shaped now, literally trades on personal data.
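To see what actually gets broadcast, here is a heavily simplified bid request. The field names loosely follow the industry’s OpenRTB convention, but the values and exact shape are invented for illustration:

```typescript
// A simplified, OpenRTB-style bid request of the kind fanned out to
// exchanges, SSPs and DSPs for a single ad impression. Values invented.
const bidRequest = {
  id: "auction-7f3a",                        // unique auction ID
  app: { bundle: "com.example.weatherapp" }, // where the user is right now
  device: {
    ifa: "8a2c1f4e-aaaa-bbbb-cccc-1234567890ab", // advertising ID: links this
                                                 // request to a bid history
    geo: { lat: 52.37, lon: 4.895 },         // precise location from the SDK
    ua: "Mozilla/5.0 (Linux; Android 9)",    // device fingerprint material
    ip: "203.0.113.42",
  },
  user: { id: "dsp-user-0042" },             // a buyer's own ID for this person
  imp: [{ banner: { w: 320, h: 50 } }],      // the ad slot up for auction
};
// Every request like this can reach tens or hundreds of companies before
// an ad is served; the wider it spreads, the more potential bids.
```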

So if the consent rug it’s been squatting on for years suddenly gets ripped out from underneath it, there would need to be a radical reshaping of ad-targeting practices to avoid trampling on EU citizens’ fundamental rights.

GDPR’s really big change was supersized fines. So ignoring the law would get very expensive.

Oh hai real-time bidding!

In Vectaury’s case, CNIL discovered the company was holding the personal data of a staggering 67.6 million people when it conducted an on-site inspection of the company in April 2018.

That already sounds like A LOT of data for a small mobile adtech player. Yet it might actually have been a tiny fraction of the personal data the company was routinely handling — given that Vectaury’s own website claims 70 percent of collected data is not stored.

In the decision there was no fine, but CNIL ordered the firm to delete all data it had not already deleted (having judged collection illegal given consent was not valid); and to stop processing data without consent.

But given the personal-data-based hinge of current-gen programmatic adtech, that essentially looks like an order to go out of business. (Or at least out of that business.)

And now we come to another interesting GDPR adtech complaint that’s not yet been ruled on by the two DPAs in question (Ireland and the U.K.) — but which looks even more compelling in light of the CNIL Vectaury decision because it picks at the adtech scab even more daringly.

Filed last month with the Irish Data Protection Commission and the U.K.’s ICO, this adtech complaint — the work of three individuals, Johnny Ryan of private web browser Brave; Jim Killock, exec director of digital and civil rights group, the Open Rights Group; and University College London data protection researcher, Michael Veale — targets the RTB system itself.

Here’s how Ryan, Killock and Veale summarized the complaint when they announced it last month:

Every time a person visits a website and is shown a “behavioural” ad on a website, intimate personal data that describes each visitor, and what they are watching online, is broadcast to tens or hundreds of companies. Advertising technology companies broadcast these data widely in order to solicit potential advertisers’ bids for the attention of the specific individual visiting the website.

A data breach occurs because this broadcast, known as a “bid request” in the online industry, fails to protect these intimate data against unauthorized access. Under the GDPR this is unlawful.

The GDPR, Article 5, paragraph 1, point f, requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss.” If you can not protect data in this way, then the GDPR says you can not process the data.

Ryan tells TechCrunch that the crux of the complaint is not related to the legal basis of the data sharing but rather focuses on the processing itself — arguing “that it itself is not adequately secure… that there aren’t adequate controls.”

Though he says there’s a consent element too, and so sees the CNIL ruling bolstering the RTB complaint. (On that keep in mind that CNIL judged Vectaury should not have been holding the RTB data of 67.6M people because it did not have valid consent.)

“We do pick up on the issue of consent in the complaint. And this particular CNIL decision has a bearing on both of those issues,” he argues. “It demonstrates in a concrete example that involved investigators going into physical premises and checking the machines — it demonstrates that even one small company was receiving tens of millions of people’s personal data in this illegal way.

“So the breach is very real. And it demonstrates that it’s not unreasonable to suggest that the consent is meaningless in any case.”

Reaching for a handy visual explainer, he continues: “If I leave a briefcase full of personal data in the middle of Charing Cross station at 11am and it’s really busy, that’s a breach. That would have been a breach back in the 1970s. If my business model is to drive up to Charing Cross station with a dump-truck and dump briefcases onto the street at 11am in the full knowledge that my business partners will all scramble around and try and grab them — and then to turn up at 11.01am and do the same thing. And then 11.02am. And every microsecond in between. That’s still a fucking data breach!

“It doesn’t matter if you think you’ve consent or anything else. You have to [comply with GDPR Article 5, paragraph 1, point f] in order to even be able to ask for a legal basis. There are plenty of other problems but that’s the biggest one that we highlighted. That’s our reason for saying this is a breach.”

“Now what CNIL has said is this company, Vectaury, was processing personal data that it did not lawfully have — and it got them through RTB,” he adds, spelling the point out. “So back to the GDPR — GDPR is saying you can’t process data in a way that doesn’t ensure protection against unauthorized or unlawful processing.”

In other words, RTB as a funnel for processing personal data looks to be on inherently shaky ground because it’s inherently putting all this personal data out there and at risk…

What’s bad for data brokers…

In another loop back, Ryan says the regulators have been in touch since the RTB complaint was filed, inviting the complainants to submit more information.

He says the CNIL Vectaury decision will be incorporated into further submissions, predicting: “This is going to be bounced around multiple regulators.”

The trio is keen to generate extra bounce by working with NGOs to enlist other individuals to file similar complaints in other EU Member States — to make the action a pan-European push, just like programmatic advertising itself.

“We now have the opportunity to connect our complaint with the excellent work that Privacy International has done, showing where these data end up, and with the excellent work that CNIL has done showing exactly how this actually applies. And this decision from CNIL takes, essentially, my report that went with our complaint and shows exactly how that applies in the real world,” he continues.

“I was writing in the abstract — CNIL has now made a decision that is very much not in the abstract, it’s in the real world affecting millions of people… This will be a European-wide complaint.”

But what does programmatic advertising that doesn’t entail trading on people’s grubbily obtained personal data actually look like? If there were no personal data in bid requests, Ryan believes quite a few things would happen. Such as, for example, the demise of clickbait.

“There would be no way to take your TechCrunch audience and buy it cheaper on some shitty website. There would be no more of that arbitrage stuff. Clickbait would die! All that nasty stuff would go away,” he suggests.

(And, well, full disclosure: We are TechCrunch — so we can confirm that does sound really great to us!)

He also reckons ad values would go up. Which would also be good news for publishers. (“Because the only place you could buy the TechCrunch audience would be on TechCrunch — that’s a really big deal!”)

He even suggests ad fraud might shrink because the incentives would shift. Or at least they could so long as the “worthy” publishers that are able to survive in the new ad world order don’t end up being complicit with bot fraud anyway.

As it stands, publishers are being squeezed between the twin plates of the dominant adtech platforms (Google and Facebook), forced to give up the majority of their ad revenue — leaving the media industry with a shrinking slice (one that can be as lean as ~30 percent).

That then has a knock-on impact on funding newsrooms and quality journalism. And, well, on the wider web too — given all the weird incentives that operate in today’s big tech social media platform-dominated internet.

Meanwhile, a privacy-sucking programmatic monster is something only shadowy background data brokers, which lack any meaningful relationship with the people whose data they’re feeding to the beast, could truly love.

And, well, Google and Facebook.

Ryan’s view is that the reason an adtech duopoly exists boils down to the “audience leakage” being enabled by RTB. Leakage which, in his view, also isn’t compliant with EU privacy laws.

He reckons the fix for this problem is equally simple: Keep doing RTB but without any personal data.

A real-time ad bidding system that’s been stripped of personal data does not mean no targeted ads. It could still support ad targeting based on real-time factors such as an approximate location (say to a city region) and/or generic and aggregated data.

Crucially, it would not use unique identifiers that enable linking ad bids to an individual’s entire digital footprint and bid request history — as is the case now. Which essentially translates into: RIP privacy rights.
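As a sketch of what that stripping could look like in practice, here is a hypothetical helper (not drawn from the complaint or from any real exchange’s code) operating on the illustrative bid request shown earlier:

```python
import copy

def strip_personal_data(bid_request: dict) -> dict:
    """Hypothetical scrub step: drop unique identifiers and coarsen
    location before a bid request is broadcast. Illustrative only."""
    req = copy.deepcopy(bid_request)

    # Drop the persistent user identifiers that let bidders link this
    # auction to an individual's wider digital footprint and bid history.
    req.pop("user", None)

    # Strip fingerprinting and identifier material from the device object,
    # keeping only a coarse, city-level location (no lat/lon, IP or UA).
    geo = req.pop("device", {}).get("geo", {})
    req["device"] = {"geo": {k: geo[k] for k in ("city", "country") if k in geo}}

    return req
```

Bidders would still see the impression slot and the page context, which is arguably enough to price an ad in real time, but nothing tying the request to a person.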

Ryan argues that RTB without personal data would still offer plenty of “value” to advertisers — who could still reach people based on general locations and via real-time interests. (It’s a model that sounds much like what privacy search engine DuckDuckGo is doing, and DuckDuckGo has been growing, too.)

The really big problem, though, is turning the behavioral ad tanker around, given how deeply embedded the ecosystem is, even as the duopoly milks it.

That’s also why Ryan is so hopeful now, though, having parsed the CNIL decision.

His reading is that regulators will play a decisive role in pulling the ad industry’s trigger — and force through much-needed change in its targeting behavior.

“Unless the entire industry moves together, no one can be the first to remove personal data from bid requests but if the regulators step in in a big way… and say you’re all going to go out of business if you keep putting personal data into bid requests then everyone will come together — like the music industry was forced to eventually, under Steve Jobs,” he argues. “Everyone can together decide on a new short term disadvantageous but long term highly advantageous change.”

Of course such a radical reshaping is not going to happen overnight. Regulatory triggers tend to be slow motion unfoldings at the best of times. You also have to factor in the inexorable legal challenges.

But look closely and you’ll see both momentum massing behind privacy — and regulatory writing on the wall.

“Are we going to see programmatic forced to be non-personal and therefore better for every single citizen of the world (except, say, if they work for a data broker)?” adds Ryan, posing his own concluding question. “Will that massive change, which will help society and the web… will that change happen before Christmas? No. But it’s worth working on. And it’s going to take some time.

“It could be two years from now that we have the finality. But a finality there will be. Detroit was only able to fight against regulation for so long. It does come.”

Who’d have thought “taking back control” could ever sound so good?


Source: The Tech Crunch

Read More

EU parliament calls for Privacy Shield to be pulled until US complies

Posted by on Jul 5, 2018 in cloud act, data protection, EU parliament, Europe, GDPR, Government, Lawsuit, mass surveillance, Policy, Privacy, Privacy Shield, Security, TC | 1 comment

The European Parliament has been making its presence felt today. As well as reopening democratic debate around a controversial digital copyright reform proposal by voting against it being fast-tracked, MEPs have adopted a resolution calling for the suspension of the EU-US Privacy Shield.

The parliamentarians’ view is that the data transfer mechanism does not provide the necessary ‘essentially equivalent’ data protection for EU citizens — and should therefore be suspended until US authorities come into compliance.

The resolution states that the parliament:

Takes the view that the current Privacy Shield arrangement does not provide the adequate level of protection required by Union data protection law and the EU Charter as interpreted by the European Court of Justice;

Considers that, unless the US is fully compliant by 1 September 2018, the Commission has failed to act in accordance with Article 45(5) GDPR; calls therefore on the Commission to suspend the Privacy Shield until the US authorities comply with its terms

The mechanism is currently used by more than 3,300 organizations to authorize transfers of personal data from the EU to the US, including the likes of Facebook, Google, Microsoft, Amazon and Twitter, to name just a few of the well-known tech names relying on the framework.

The EU-US Privacy Shield is not yet two years old but has always been controversial, given the mass surveillance/Snowden disclosure-related reasons for the demise of its predecessor (Safe Harbor).

Privacy Shield has looked especially precarious since the election of a US president with an openly privacy-hostile, anti-foreigner agenda. And reforms to US laws that EU lawmakers had hoped would be enacted have not come to pass.

On the contrary, US lawmakers dug in entirely on warrantless surveillance (aka Section 702 of the Foreign Intelligence Surveillance Act), giving it six more years — and offering nothing in the way of the sought-for reforms.

In today’s resolution the parliament writes that it “regrets that the US did not seize the opportunity of the recent reauthorisation of FISA Section 702 to include the safeguards provided in PPD 28” — referring to an Obama era Presidential Policy Directive that backed extending privacy protections to non-US nationals (when a very different US president wrote that US signals intelligence activities “must take into account that all persons should be treated with dignity and respect, regardless of their nationality or wherever they might reside, and that all persons have legitimate privacy interests in the handling of their personal information”).

EU lawmakers have always wanted a more formal, robust and lasting commitment than a PPD, though, and privacy provisions for foreigners’ data being included in FISA was their preferred outcome. Safe to say, Trump has not picked up that baton.

The parliament is also calling for “evidence and legally binding commitments” to ensure that data collection under FISA Section 702 is not “indiscriminate and access is not conducted on a generalised basis (bulk collection)” — which would be in contravention of the EU’s Charter on Fundamental Rights.

Specifically, it’s backing calls by the EU’s influential WP29 group (made up of Member State data protection chiefs, and now known as the European Data Protection Board, EDPB) for an updated report from its rather less influential US counterpart, the Privacy and Civil Liberties Oversight Board (which still has only one active board member listed on its website; yet another bone of contention for Privacy Shield compliance), to provide definition and detail on how US intelligence agencies are actually handling ‘bulk data’.

The parliament writes that it wants the PCLOB to report on “the definition of ‘targets’, on the ‘tasking of selectors’ and on the concrete process of applying the selectors in the context of the UPSTREAM [aka the NSA’s Internet and telephone data collection program] to clarify and assess whether bulk access to personal data occurs in that context”.

The parliament is also angry that EU individuals have been excluded from additional protection provided by the reauthorisation of FISA Section 702 — saying it contains “several amendments that are merely procedural and do not address the most problematic issues” — with MEPs amping up pressure on the Commission, urging the EU’s executive body to “take the forthcoming WP29 analysis on FISA Section 702 seriously and to act accordingly”.

Privacy Shield was only officially adopted in July 2016, but EU lawmakers have been getting increasingly unhappy because core components of the framework have been left hanging by US authorities. Such as the ongoing lack of a permanent appointment to an ombudsperson role that’s intended to act as a key arbiter for any data-related complaints from EU citizens, given the data controllers in question are in the US.

The parliament also raises concerns about the executive order signed by Trump in January 2017 — aka the ‘Enhancing Public Safety’ order, which stripped away privacy protections from non-U.S. citizens. While Privacy Shield does not directly rest on the US Privacy Act affected by this order, the parliament says its substance indicates “the intention of the US executive to reverse the data protection guarantees previously granted to EU citizens and to override the commitments made towards the EU during the Obama Presidency”.

So, as we wrote at the time, the trajectory of Trump’s administration vis-a-vis privacy and foreigners did not — and does not — bode well for smooth data flows between the two regions; aka the lifeblood of business — not just tech business.

It’s also unhappy about the recent adoption of the Clarifying Lawful Overseas Use of Data Act (aka the Cloud Act), writing that this “expands the abilities of American and foreign law enforcement to target and access people’s data across international borders without making use of the mutual legal assistance (MLAT) instruments, which provide for appropriate safeguards and respect the judicial competences of the countries where the information is located”.

“The Cloud Act could have serious implications for the EU as it is far-reaching and creates a potential conflict with the EU data protection laws,” it adds — saying a more balanced solution would have been to strengthen the existing international system of MLATs “with a view to encouraging international and judicial cooperation”.

And, well, you can’t imagine treaty-ripping Trump getting cosy with that idea.

Pressure has especially stepped up on Privacy Shield in recent months, ahead of the mechanism’s second annual review — which is due to take place in October — as the review process should, in theory, provide some leverage for the EU over its US counterparts, as the Commission can hold up the threat of suspension for compliance failures.

Although, once the EC declares the annual review has passed, the lever arguably flips the other way — and Privacy Shield seemingly gets another year’s grace, with critics fobbed off with talk of ‘improvements to be made’, as happened at the first annual review last year.

Hence EU parliamentarians are amping up the pressure now, ahead of the review, much as the WP29 did last year.

The LIBE committee also called for a suspension last month, raising pointed concerns about the adequacy of protections for EU citizens’ data in light of the Facebook-Cambridge Analytica data misuse scandal. Europeans were among the up to 87M accounts compromised in that scandal. Though there have been many other recently emerging instances of Facebook failing to lock down user data.

The company remains listed as an active participant in the EU-US Privacy Shield framework for now, even though it is under investigation by the FTC as a consequence of the Cambridge Analytica scandal, and several other federal agencies are reportedly examining related statements Facebook has made. So it’s facing rising heat.

Any sanction or removal from the framework depends on US authorities judging an entity to have breached its obligations under the framework — and taking action.

Notably SCL Elections — a US subsidiary of the now defunct Cambridge Analytica — is now listed as inactive (it was still active just under a month ago).

The continued presence of any entity on the Privacy Shield list that has demonstrably failed to safeguard EU citizens’ personal data must raise serious questions over how much actual protection the framework affords.

In a statement on the parliament resolution today, Libe committee chair and rapporteur Claude Moraes said: “This resolution makes clear that the Privacy Shield in its current form does not provide the adequate level of protection required by EU data protection law and the EU Charter. Progress has been made to improve on the Safe Harbor agreement but this is insufficient to ensure the legal certainty required for the transfer of personal data.

“In the wake of data breaches like the Facebook and Cambridge Analytica scandal, it is more important than ever to protect our fundamental right to data protection and to ensure consumer trust. The law is clear and, as set out in the GDPR, if the agreement is not adequate, and if the US authorities fail to comply with its terms, then it must be suspended until they do.”

Suspending the mechanism entirely would certainly concentrate minds in the US administration — given the thousands of US companies signed up to rely on it simplifying their business operations.

Were that to happen, many of these companies would be left scrambling to put in place alternative legal arrangements to authorize data transfers — or even have to suspend data flows altogether, depending on their threshold for legal risk. (Remember the EU also now has a tough new data protection framework.)

However only the European Commission can suspend the Privacy Shield mechanism itself.

And the Commission continues to stand behind the framework it worked with the US to shape and negotiate. Christian Wigand, a Commission spokesperson, told us it intends to continue to work with the US administration on improving the implementation of Privacy Shield.

In a statement he said:

The Commission takes note of the European Parliament resolution on the EU- U.S. Privacy Shield. The Commission’s position is clear and laid out in the first annual review report. The first review showed that the Privacy Shield works well, but there is some room for improving its implementation.

The Commission is working with the US administration and expects them to address the EU concerns. Commissioner Jourová was in the U.S. last time in March to engage with the U.S. government on the follow-up and discussed what the U.S. side should do until the next annual review in October.

Commissioner Jourová also sent letters to US State Secretary Pompeo, Commerce Secretary Ross and Attorney General Sessions urging them to do the necessary improvements, including on the Ombudsman, as soon as possible.

We will continue to work to keep the Privacy Shield running and ensure Europeans’ data are well protected. Around 4,000 companies are using it currently.

There’s a wild card here too though: Privacy Shield is now facing serious legal questions in Europe, having been looped into what began as a separate legal challenge to another data transfer mechanism — used by the likes of Facebook — to authorize transfers of EU users’ personal data to the US for processing.

That case recently resulted in a referral of various legal questions, including around Privacy Shield, to Europe’s top court — thereby posing what could be an existential threat to the whole arrangement. (Though Facebook is attempting to derail the referral, and has an appeal against it set to be heard in Ireland’s Supreme Court later this month.)

While the Commission has a vested interest in defending and maintaining a framework it renegotiated so very recently, and which it can trumpet as a success given the number of businesses that have jumped on board, the CJEU will be looking at Privacy Shield’s adequacy protections purely from the legal perspective — and, as happened with Safe Harbor in 2015, the court could decide the mechanism is legally unsound and strike it down at the stroke of a pen.

At which point the scrambling and renegotiating would begin all over again.

In its second plenary meeting today, the EDPB notes that Privacy Shield was among the topics discussed. The group says it also met with the acting US ombudsperson responsible for handling national security complaints under the Privacy Shield, ambassador Judith Garber (who, nonetheless, is not a permanent appointee).

In a statement released after the plenary, it writes that the meeting with Garber was “interesting and collegial” but did not provide a conclusive answer to its ongoing concerns, including around the ombudsperson role; the lack of formal appointments to the PCLOB; the lack of additional information on the ombudsperson mechanism; and further declassification of the procedural rules, in particular on how the ombudsperson interacts with the intelligence services.

“These issues will remain on top of the agenda during the second annual review,” it writes. “In addition, it calls for supplementary evidence to be given by the US authorities in order to address these concerns. Finally, the EDPB notes that the same concerns will be addressed by the European Court of Justice in cases that are already pending, and to which the EDPB offers to contribute its view, if invited by the CJEU.”


Source: The Tech Crunch

Read More

AI spots legal problems with tech T&Cs in GDPR research project

Posted by on Jul 4, 2018 in Airbnb, Amazon, Apple, Artificial Intelligence, data protection, data security, epic games, Europe, European Union, Facebook, GDPR, General Data Protection Regulation, Google, instagram, law, Machine Learning Technology, Microsoft, Netflix, personally identifiable information, Privacy, privacy policy, Skyscanner, Steam, TC, tcs, terms of service, Uber, WhatsApp | 0 comments

Technology is the proverbial double-edged sword. And an experimental European research project is ensuring this axiom cuts very close to the industry’s bone indeed by applying machine learning technology to critically sift big tech’s privacy policies — to see whether AI can automatically identify violations of data protection law.

The still-in-training privacy policy and contract parsing tool — called ‘Claudette‘, aka (automated) clause detector — is being developed by researchers at the European University Institute in Florence.

They’ve also now got support from European consumer organization BEUC — for a ‘Claudette meets GDPR‘ project — which specifically applies the tool to evaluate compliance with the EU’s General Data Protection Regulation.

Early results from this project have been released today, with BEUC saying the AI was able to automatically flag a range of problems with the language being used in tech T&Cs.

The researchers set Claudette to work analyzing the privacy policies of 14 companies in all — namely: Google, Facebook (and Instagram), Amazon, Apple, Microsoft, WhatsApp, Twitter, Uber, AirBnB, Booking, Skyscanner, Netflix, Steam and Epic Games — saying this group was selected to cover a range of online services and sectors.

And also because they are among the biggest online players and — I quote — “should be setting a good example for the market to follow”. Ahem, should.

The AI analysis of the policies was carried out in June, after the update to the EU’s data protection rules had come into force. The regulation tightens requirements on obtaining consent for processing citizens’ personal data by, for example, increasing transparency requirements — basically requiring that privacy policies be written in clear and intelligible language, explaining exactly how the data will be used, in order that people can make a genuine, informed choice to consent (or not consent).

In theory, all 15 parsed privacy policies should have been compliant with GDPR by June, as it came into force on May 25. However some tech giants are already facing legal challenges to their interpretation of ‘consent’. And it’s fair to say the law has not vanquished the tech industry’s fuzzy language and logic overnight. Where user privacy is concerned, old, ugly habits die hard, clearly.

But that’s where BEUC is hoping AI technology can help.

It says that out of a combined 3,659 sentences (80,398 words), Claudette marked 401 sentences (11.0%) as containing unclear language, and 1,240 (33.9%) as containing “potentially problematic” clauses or clauses providing “insufficient” information.

BEUC says identified problems include:

  • Not providing all the information which is required under the GDPR’s transparency obligations. “For example companies do not always inform users properly regarding the third parties with whom they share or get data from”
  • Processing of personal data not happening according to GDPR requirements. “For instance, a clause stating that the user agrees to the company’s privacy policy by simply using its website”
  • Policies are formulated using vague and unclear language (i.e. using language qualifiers that really bring the fuzz — such as “may”, “might”, “some”, “often”, and “possible”) — “which makes it very hard for consumers to understand the actual content of the policy and how their data is used in practice”. (See the toy sketch after this list for how mechanically detectable such qualifiers are.)
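Claudette itself relies on trained machine learning models, which aren’t reproduced here. But the vague-qualifier pattern in that last bullet is simple enough to approximate with rules; a toy sketch follows, in which the word list echoes BEUC’s quoted examples while the flagging rule is our own:

```python
import re

# Toy approximation of the vague-language flagging described above. The
# qualifier list echoes the examples BEUC quotes; the heuristic itself is
# illustrative only. Claudette proper uses trained ML models, not rules.
VAGUE_QUALIFIERS = {"may", "might", "some", "often", "possible"}

def flag_vague_sentences(policy_text: str) -> list:
    """Return the sentences containing at least one vague qualifier."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    return [
        s.strip()
        for s in sentences
        if set(re.findall(r"[a-z']+", s.lower())) & VAGUE_QUALIFIERS
    ]

policy = ("We may share some of your information with partners. "
          "We delete your account data within 30 days of closure.")
print(flag_vague_sentences(policy))
# -> ['We may share some of your information with partners.']
```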

The bolstering of the EU’s privacy rules, with GDPR tightening the consent screw and supersizing penalties for violations, was exactly intended to prevent this kind of stuff. So it’s pretty depressing — though hardly surprising — to see the same, ugly T&C tricks continuing to be used to try to sneak consent by keeping users in the dark.

We reached out to two of the largest tech giants whose policies Claudette parsed — Google and Facebook — to ask if they want to comment on the project or its findings.

A Google spokesperson said: “We have updated our Privacy Policy in line with the requirements of the GDPR, providing more detail on our practices and describing the information that we collect and use, and the controls that users have, in clear and plain language. We’ve also added new graphics and video explanations, structured the Policy so that users can explore it more easily, and embedded controls to allow users to access relevant privacy settings directly.”

At the time of writing Facebook had not responded to our request for comment.

Commenting in a statement, Monique Goyens, BEUC’s director general, said: “A little over a month after the GDPR became applicable, many privacy policies may not meet the standard of the law. This is very concerning. It is key that enforcement authorities take a close look at this.”

The group says it will be sharing the research with EU data protection authorities, including the European Data Protection Board. And is not itself ruling out bringing legal actions against law benders.

But it’s also hopeful that automation will — over the longer term — help civil society keep big tech in legal check.

Although, where this project is concerned, it also notes that the training data-set was small — conceding that Claudette’s results were not 100% accurate — and says more privacy policies would need to be manually analyzed before the analysis could be conducted by machines alone.

So file this one under ‘promising research’.

“This innovative research demonstrates that just as Artificial Intelligence and automated decision-making will be the future for companies from all kinds of sectors, AI can also be used to keep companies in check and ensure people’s rights are respected,” adds Goyens. “We are confident AI will be an asset for consumer groups to monitor the market and ensure infringements do not go unnoticed.

“We expect companies to respect consumers’ privacy and the new data protection rights. In the future, Artificial Intelligence will help identify infringements quickly and on a massive scale, making it easier to start legal actions as a result.”

For more on the AI-fueled future of legal tech, check out our recent interview with Mireille Hildebrandt.


Source: The Tech Crunch

Read More

Audit of NHS Trust’s app project with DeepMind raises more questions than it answers

Posted by on Jun 13, 2018 in Apps, Artificial Intelligence, data management, data protection, deep learning, DeepMind, Europe, Google, Health, law, machine learning, MedConfidential, National Health Service, NHS, Privacy, Streams app, United Kingdom | 4 comments

A third party audit of a controversial patient data-sharing arrangement between a London NHS Trust and Google DeepMind appears to have skirted over the core issues that generated the controversy in the first place.

The audit (full report here) — conducted by law firm Linklaters — covers the Royal Free NHS Foundation Trust’s acute kidney injury detection app system, Streams, which was co-developed with Google-DeepMind (using an existing NHS algorithm for early detection of the condition). But it does not examine the problematic 2015 information-sharing agreement inked between the pair, which allowed data to start flowing in the first place.

“This Report contains an assessment of the data protection and confidentiality issues associated with the data protection arrangements between the Royal Free and DeepMind. It is limited to the current use of Streams, and any further development, functional testing or clinical testing, that is either planned or in progress. It is not a historical review,” writes Linklaters, adding that: “It includes consideration as to whether the transparency, fair processing, proportionality and information sharing concerns outlined in the Undertakings are being met.”

Yet it was the original 2015 contract that triggered the controversy, after it was obtained and published by New Scientist, with the wide-ranging document raising questions over the broad scope of the data transfer; the legal bases for patients’ information to be shared; and whether regulatory processes intended to safeguard patients and patient data had been sidelined by the two main parties involved in the project.

In November 2016 the pair scrapped and replaced the initial five-year contract with a different one — which put in place additional information governance steps.

They also went on to roll out the Streams app for use on patients in multiple NHS hospitals — despite the UK’s data protection regulator, the ICO, having instigated an investigation into the original data-sharing arrangement.

And just over a year ago the ICO concluded that the Royal Free NHS Foundation Trust had failed to comply with Data Protection Law in its dealings with Google’s DeepMind.

The audit of the Streams project was a requirement of the ICO.

Though, notably, the regulator has not endorsed Linklaters’ report. On the contrary, it warns that it’s seeking legal advice and could take further action.

In a statement on its website, the ICO’s deputy commissioner for policy, Steve Wood, writes: “We cannot endorse a report from a third party audit but we have provided feedback to the Royal Free. We also reserve our position in relation to their position on medical confidentiality and the equitable duty of confidence. We are seeking legal advice on this issue and may require further action.”

In a section of the report listing exclusions, Linklaters confirms the audit does not consider: “The data protection and confidentiality issues associated with the processing of personal data about the clinicians at the Royal Free using the Streams App.”

So essentially the core controversy, related to the legal basis for the Royal Free to pass personally identifiable information on 1.6M patients to DeepMind when the app was being developed, and without people’s knowledge or consent, is going unaddressed here.

And Wood’s statement pointedly reiterates that the ICO’s investigation “found a number of shortcomings in the way patient records were shared for this trial”.

“[P]art of the undertaking committed Royal Free to commission a third party audit. They have now done this and shared the results with the ICO. What’s important now is that they use the findings to address the compliance issues addressed in the audit swiftly and robustly. We’ll be continuing to liaise with them in the coming months to ensure this is happening,” he adds.

“It’s important that other NHS Trusts considering using similar new technologies pay regard to the recommendations we gave to Royal Free, and ensure data protection risks are fully addressed using a Data Protection Impact Assessment before deployment.”

While the report is something of a frustration, given the glaring historical omissions, it does raise some points of interest — including suggesting that the Royal Free should probably scrap a Memorandum of Understanding it also inked with DeepMind, in which the pair set out their ambition to apply AI to NHS data.

This is recommended because the pair have apparently abandoned their AI research plans.

On this Linklaters writes: “DeepMind has informed us that they have abandoned their potential research project into the use of AI to develop better algorithms, and their processing is limited to execution of the NHS AKI algorithm… In addition, the majority of the provisions in the Memorandum of Understanding are non-binding. The limited provisions that are binding are superseded by the Services Agreement and the Information Processing Agreement discussed above, hence we think the Memorandum of Understanding has very limited relevance to Streams. We recommend that the Royal Free considers if the Memorandum of Understanding continues to be relevant to its relationship with DeepMind and, if it is not relevant, terminates that agreement.”

In another section, discussing the NHS algorithm that underpins the Streams app, the law firm also points out that DeepMind’s role in the project is little more than helping provide a glorified app wrapper (on the app design front the project also utilized UK app studio, ustwo, so DeepMind can’t claim app design credit either).

“Without intending any disrespect to DeepMind, we do not think the concepts underpinning Streams are particularly ground-breaking. It does not, by any measure, involve artificial intelligence or machine learning or other advanced technology. The benefits of the Streams App instead come from a very well-designed and user-friendly interface, backed up by solid infrastructure and data management that provides AKI alerts and contextual clinical information in a reliable, timely and secure manner,” Linklaters writes.

What DeepMind did bring to the project, and to its other NHS collaborations, is money and resources — providing its development resources free for the NHS at the point of use, and stating (when asked about its business model) that it would determine how much to charge the NHS for these app ‘innovations’ later.

Yet the commercial services the tech giant is providing to what are public sector organizations do not appear to have been put out to open tender.

Also notably excluded from the Linklaters audit: any scrutiny of the project vis-a-vis competition law; compliance with public procurement rules; and any concerns relating to possible anticompetitive behavior.

The report does highlight one potentially problematic data retention issue for the current deployment of Streams, saying there is “currently no retention period for patient information on Streams” — meaning there is no process for deleting a patient’s medical history once it reaches a certain age.

“This means the information on Streams currently dates back eight years,” it notes, suggesting the Royal Free should probably set an upper age limit on the age of information contained in the system.
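For illustration, the kind of upper age limit the report is recommending amounts to very little code. A hypothetical sketch, in which the record shape, field names and the eight-year window are ours rather than anything drawn from Streams or the Linklaters report:

```python
from datetime import datetime, timedelta, timezone
from typing import List, Optional

# Hypothetical sketch of a retention cut-off for clinical records held in
# an app back-end. Record shape, field names and the eight-year window are
# illustrative, not drawn from Streams or the Linklaters report.
RETENTION = timedelta(days=365 * 8)

def purge_expired(records: List[dict], now: Optional[datetime] = None) -> List[dict]:
    """Keep only records newer than the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION
    return [r for r in records if r["recorded_at"] >= cutoff]

records = [
    {"patient_id": "p1", "recorded_at": datetime(2010, 5, 1, tzinfo=timezone.utc)},
    {"patient_id": "p2", "recorded_at": datetime(2018, 5, 1, tzinfo=timezone.utc)},
]

# With 'now' pinned to mid-2018, only the recent record survives the
# cut-off; the 2010 record would be deleted.
print(purge_expired(records, now=datetime(2018, 6, 13, tzinfo=timezone.utc)))
```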

While Linklaters largely glosses over the chequered origins of the Streams project, the law firm does make a point of agreeing with the ICO that the original privacy impact assessment for the project “should have been completed in a more timely manner”.

It also describes it as “relatively thin given the scale of the project”.

Giving its response to the audit, health data privacy advocacy group MedConfidential — an early critic of the DeepMind data-sharing arrangement — is roundly unimpressed, writing: “The biggest question raised by the Information Commissioner and the National Data Guardian appears to be missing — instead, the report excludes a “historical review of issues arising prior to the date of our appointment”.

“The report claims the ‘vital interests’ (i.e. remaining alive) of patients is justification to protect against an “event [that] might only occur in the future or not occur at all”… The only ‘vital interest’ protected here is Google’s, and its desire to hoard medical records it was told were unlawfully collected. The vital interests of a hypothetical patient are not vital interests of an actual data subject (and the GDPR tests are demonstrably unmet).

“The ICO and NDG asked the Royal Free to justify the collection of 1.6 million patient records, and this legal opinion explicitly provides no answer to that question.”


Source: The Tech Crunch

Read More