
Targeted ads offer little extra value for online publishers, study suggests

Posted on May 31, 2019

How much value do online publishers derive from behaviorally targeted advertising that uses privacy-hostile tracking technologies to determine which advert to show a website user?

A new piece of research suggests publishers make just 4% more revenue from a behaviorally targeted ad than they would from serving a non-targeted ad.

It’s a finding that sheds suggestive light on why so many newsroom budgets are shrinking and so many journalists are finding themselves out of work — even as adtech giants continue stuffing their coffers with massive profits.

Visit the average news website lousy with third party cookies (yes, we know, it’s true of TC too) and you’d be forgiven for thinking the publisher is also getting fat profits from the data creamed off its users as it plugs into programmatic ad systems that trade information on Internet users’ browsing habits to determine which ad gets displayed.

Yet while the online ad market is massive and growing — $88BN in revenues in the US in 2017, per IAB data, a 21% year-on-year increase — publishers are not the entities getting filthy rich off of their own content.

On the contrary, research in recent years has suggested that a large proportion of publishers are being squeezed by digital display advertising economics, with some 40% reporting either stagnant or shrinking ad revenue, per a 2015 Econsultancy study. (Hence, we can posit, the rise in publishers branching into subscriptions — TC’s own offering can be found here: Extra Crunch).

The lion’s share of value being created by digital advertising ends up in the coffers of the adtech giants, Google and Facebook, aka the adtech duopoly. In the US, the pair account for around 60% of digital ad market spending, per eMarketer — or circa $76.57BN.

Their annual revenues have mirrored overall growth in digital ad spend — rising from $74.9BN to $136.8BN, between 2015 and 2018, in the case of Google’s parent Alphabet; and $17.9BN to $55.8BN for Facebook. (While US online ad spend stepped up from $59.6BN to $107.5BN+ between 2015 and 2018.)

eMarketer projects 2019 will mark the first decline in the duopoly’s collective share. But not because publishers’ fortunes are suddenly set for a bonanza turnaround. Rather another tech giant — Amazon — has been growing its share of the digital ad market, and is expected to make what eMarketer dubs the start of “a small dent in the duopoly”.

Behavioral advertising — aka targeted ads — has come to dominate the online ad market, fuelled by platform dynamics encouraging a proliferation of tracking technologies and techniques in the unregulated background. And by, it seems, greater effectiveness from the perspective of online advertisers, as the paper notes. (“Despite measurement and attribution challenges… many studies seem to concur that targeted advertising is beneficial and effective for advertising firms.”)

This has had the effect of squeezing out non-targeted display ads, such as those that rely on contextual factors to select the ad — e.g. the content being viewed, device type or location.

The latter are now the exception; a fall-back for when cookies have been blocked, for example. (Albeit one that veteran pro-privacy search engine DuckDuckGo has nonetheless turned into a profitable contextual ad business.)

One 2017 study by IHS Markit suggested that 86% of programmatic advertising in Europe was using behavioural data, while even a quarter (24%) of non-programmatic advertising was found to be using behavioural data, per its model.

“In 2016, 90% of the digital display advertising market growth came from formats and processes that use behavioural data,” it observed, projecting growth of 106% for behaviourally targeted advertising between 2016 and 2020, and a decline of 63.6% for forms of digital advertising that don’t use such data.

The economic incentives to push behavioral advertising vs non-targeted ads look clear for dominant platforms that rely on amassing scale — across advertisers, other people’s eyeballs, content and behavioral data — to extract value from the Internet’s dispersed and diverse audience.

But the incentives for content producers to subject themselves — and their engaged communities of users — to these privacy-hostile economies of scale look a whole lot more fuzzy.

Concern about potential imbalances in the online ad market is also leading policymakers and regulators on both sides of the Atlantic to question the opacity of the market — and call for greater transparency.

A price on people tracking’s head

The new research, which will be presented at the Workshop on the Economics of Information Security conference in Boston next week, aims to contribute a new piece to this digital ad revenue puzzle by trying to quantify the value to a single publisher of choosing ads that are behaviorally targeted vs those that aren’t.

We’ve flagged the research before — when the findings were cited by one of the academics involved in the study at an FTC hearing — but the full paper has now been published.

It’s called Online Tracking and Publishers’ Revenues: An Empirical Analysis, and is co-authored by three academics: Veronica Marotta, an assistant professor in information and decision sciences at the Carlson School of Management, University of Minnesota; Vibhanshu Abhishek, associate professor of information systems at the Paul Merage School of Business, University of California, Irvine; and Alessandro Acquisti, professor of IT and public policy at Carnegie Mellon University.

“While the impact of targeted advertising on advertisers’ campaign effectiveness has been vastly documented, much less is known about the value generated by online tracking and targeting technologies for publishers – the websites that sell ad spaces,” the researchers write. “In fact, the conventional wisdom that publishers benefit too from behaviorally targeted advertising has rarely been scrutinized in academic studies.”

“As we briefly mention in the paper, notwithstanding claims about the shared benefits of online tracking and behaviorally targeting for multiple stakeholders (merchants, publishers, consumers, intermediaries…), there is a surprising paucity of empirical estimates of economic outcomes from independent researchers,”  Acquisti also tells us.

“In fact, most of the estimates focus on the advertisers’ side of the market (for instance, there have been quite a few studies estimating the increase in click-through or conversion rates associated with targeted ads); much less is known about the publishers’ side of the market. So, going into the study, we were genuinely curious about what we may find, as there was little in terms of data that could anchor our predictions.

“We did have theoretical bases to make possible predictions, but those predictions could be quite antithetical. Under one story, targeting increases the value of the audience, which increases advertisers’ bids, which increases publishers’ revenues; under a different story, targeting decreases the ‘pool’ of audience interested in an ad, which decreases competition to display ads, which reduces advertisers’ bids, eventually reducing publishers’ revenues.”

For the study the researchers were provided with a data-set comprising “millions” of display ad transactions completed in a week across multiple online outlets owned by a single (unidentified) large publisher which operates websites in a range of verticals such as news, entertainment and fashion.

The data-set also included whether or not the site visitor’s cookie ID was available — enabling analysis of the price difference between behaviorally targeted and non-targeted ads. (The researchers used a statistical mechanism to control for systematic differences between users who block cookies.)
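To make that kind of control concrete, here is a deliberately minimal sketch (an assumption-laden illustration, not the paper's actual specification) of a regression comparing transaction prices with and without an available cookie ID while holding observable context fixed. The column names and input file are hypothetical.

```python
# Illustrative only: a simple way to estimate the price premium of
# cookie-enabled (behaviorally targetable) impressions, controlling for
# observable context. Column names and the input file are hypothetical;
# the published study's actual specification may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ad_transactions.csv")  # one row per display ad transaction

model = smf.ols(
    "np.log(price) ~ cookie_available + C(device_type) + C(vertical) + C(hour)",
    data=df,
)
result = model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(result.summary())

# For small coefficients, the estimate on cookie_available approximates the
# percentage premium paid for an impression where the visitor's cookie ID
# was available versus one where it wasn't.
```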

As noted above, the top-line finding is only a very small gain for the publisher whose data they were analyzing — of around 4%. Or an average increase of $0.00008 per advertisement. 
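As a back-of-the-envelope check (our own inference from the two reported figures, not a number in the paper): if a roughly 4% uplift is worth about $0.00008 per advertisement, the implied average price of a non-targeted impression for this publisher is in the region of $0.002, i.e. roughly a $2 CPM.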

It’s a finding that contrasts wildly with some of the loud yet unsubstantiated opinions which can be found being promulgated online — claiming the ‘vital necessity’ of behavioral ads to support publishers/journalism.

For example, this article, published earlier this month by a freelance journalist writing for The American Prospect, includes the claim that: “An online advertisement without a third-party cookie sells for just 2 percent of the cost of the same ad with the cookie.” Yet it does not specify a source for the statistic it cites.

(The author told us the reference is to a 2018 speech made by Index Exchange’s Andrew Casale, when he suggested ad requests without a buyer ID receive 99% lower bids vs the same ad request with the identifier. She added that her conversations with people in the adtech industry had suggested a spread between a 99% and 97% decline in the value of an ad without a cookie, hence choosing a middle point.)

At the same time policymakers in the US now appear painfully aware how far behind Europe they are lagging where privacy regulation is concerned — and are fast dialling up their scrutiny of and verbal horror over how Internet users are tracked and profiled by adtech giants.

At a Senate Judiciary Committee hearing earlier this month — convened with the aim of “understanding the digital ad ecosystem and the impact of data privacy and competition policy” — the talk was not whether to regulate big tech but how hard to crack down on monopolistic ad giants.

“That’s what brings us here today. The lack of choice [for consumers to preserve their privacy online],” said senator Richard Blumenthal. “The excessive and extraordinary power of Google and Facebook and others who dominate the market is a fact of life. And so privacy protection is absolutely vital in the short run.”

The kind of “invasive surveillance” that the adtech industry systematically deploys is “something we would never tolerate from a government but Facebook and Google have the power of government never envisaged by our founders,” Blumenthal went on, before listing a few of the types of personal data that are sucked up and exploited by the adtech industrial surveillance complex: “Health, dating, location, finance, extremely personal details — offered to anyone with almost no restraint.”

Bearing that “invasive surveillance” in mind, a 4% publisher ‘premium’ for privacy-hostile ads vs adverts that are merely contextually served (and so don’t require pervasive tracking of web users) starts to look like a massive rip off — of both publisher brand and audience value, as well as Internet users’ rights and privacy.

Yes, targeted ads do appear to generate a small revenue increase, per the study. But, as the researchers also point out, that needs to be offset against the cost to publishers of complying with privacy regulations.

“If setting tracking cookies on visitors was cost free, the website would definitely be losing money. However, the widespread use of tracking cookies – and, more broadly, the practice of tracking users online – has been raising privacy concerns that have led to the adoption of stringent regulations, in particular in the European Union,” they write — going on to cite an estimate by the International Association of Privacy Professionals that Fortune’s Global 500 companies will spend around $7.8BN on compliance costs to meet the requirements of Europe’s General Data Protection Regulation (GDPR).

The wider costs of systematically eroding online privacy are harder for publishers to put a value on. But they should also be considered — whether it’s the cost to brand reputation and user loyalty when a publisher lards its sites with unwanted trackers, or the wider societal costs linked to the risks of data-fuelled manipulation and exploitation of vulnerable groups. Simply put, it’s not a good look.

Publishers may appear complicit in the asset stripping of their own content and audiences for what — per this study — seems only marginal gain, but the opacity of the adtech industry means most of them likely don’t realize exactly what kind of ‘deal’ they’re getting at the hands of the ad giants who grip them.

Which makes this research paper a very compelling read for the online publishing industry… and, well, a pretty awkward newsflash for anyone working in adtech.

 

While the study only provides a snapshot of ad market economics, as experienced by a single publisher, the glimpse it presents is distinctly different from the picture the adtech lobby has sought to paint, as it has ploughed money into arguing against privacy legislation — on the claimed grounds that ‘killing behavioural advertising would kill free online content’. 

Saying no more creepy ads might only marginally reduce publishers’ revenue doesn’t have quite the same doom-laden ring, clearly.

“In a nutshell, this study provides an initial data point on a portion of the advertising ecosystem over which claims had been made but little empirical verification was completed. The results highlight the need for more transparency over how the value generated by flows of data gets allocated to different stakeholders,” says Acquisti, summing up how the study should be read against the ad market as a whole.

Contacted for a response to the research, Randall Rothenberg, CEO of advertising business organization, the IAB, agreed that the digital supply chain is “too complex and too opaque” — and also expressed concern about how relatively little value generated by targeted ads is trickling down to publishers.

“One week’s worth of data from one unidentified publisher does not make for a projectible (sic) piece of research. Still, the study shows that targeted advertising creates immense value for brands — more than 90% of the unnamed publisher’s auctioned ads were sold with targeting attached, and advertisers were willing to pay a 60% premium for those ads. Yet very little of that value flowed to the publisher,” he told TechCrunch. “As IAB has been saying for a decade, the digital supply chain is too complex and too opaque, and this diversion of value is more proof that transparency is required so that publishers can benefit from the value they create.”

The research paper includes discussion of the limitations to the approach, as well as ideas for additional research work — such as looking at how the value of cookies changes depending on how much information they contain (on that they write of their initial findings: “Information seem to be very valuable (from the publisher’s perspective) when we compare cookies with very little information to cookies with some information; after a certain point, adding more information to a cookie does not seem to create additional value for the publisher”); and investigating how “the (un)availability of a cookie changes the competition in the auction” — to try to understand ad auction competition dynamics and the potential mechanisms at play.

“This is one new and hopefully useful data point, to which others must be added,” Acquisti also told us in concluding remarks. “The key to research work is incremental progress, with more studies progressively adding a clearer understanding of an issue, and we look forward to more research in this area.”

This report was updated with additional comment.


Source: The Tech Crunch


The facts about Facebook

Posted on Jan 26, 2019

This is a critical reading of Facebook founder Mark Zuckerberg’s article in the WSJ on Thursday, also entitled The Facts About Facebook.

Yes Mark, you’re right; Facebook turns 15 next month. What a long time you’ve been in the social media business! We’re curious as to whether you’ve also been keeping count of how many times you’ve been forced to apologize for breaching people’s trust or, well, otherwise royally messing up over the years.

It’s also true you weren’t setting out to build “a global company”. The predecessor to Facebook was a ‘hot or not’ game called ‘FaceMash’ that you hacked together while drinking beer in your Harvard dorm room. Your late night brainwave was to get fellow students to rate each other’s attractiveness — and you weren’t at all put off by not being in possession of the necessary photo data to do this. You just took it; hacking into the college’s online facebooks and grabbing people’s selfies without permission.

Blogging about what you were doing as you did it, you wrote: “I almost want to put some of these faces next to pictures of some farm animals and have people vote on which is more attractive.” Just in case there was any doubt as to the ugly nature of your intention. 

The seeds of Facebook’s global business were thus sown in a crude and consentless game of clickbait whose idea titillated you so much you thought nothing of breaching security, privacy, copyright and decency norms just to grab a few eyeballs.

So while you may not have instantly understood how potent this ‘outrageous and divisive’ eyeball-grabbing content tactic would turn out to be — oh hai future global scale! — the core DNA of Facebook’s business sits in that frat boy discovery where your eureka Internet moment was finding you could win the attention jackpot by pitting people against each other.

Pretty quickly you also realized you could exploit and commercialize human one-upmanship — gotta catch em all friend lists! popularity poke wars! — and stick a badge on the resulting activity, dubbing it ‘social’.

FaceMash was antisocial, though. And the unpleasant flipside that can clearly flow from ‘social’ platforms is something you continue to be neither honest nor open enough about. Whether it’s political disinformation, hate speech or bullying, the individual and societal impacts of maliciously minded content, shared and amplified using massively mainstream tools you control, are now impossible to ignore.

Yet you prefer to play down these human impacts; as a “crazy idea”, or by implying that ‘a little’ amplified human nastiness is the necessary cost of being in the big multinational business of connecting everyone and ‘socializing’ everything.

But did you ask the father of 14-year-old Molly Russell, a British schoolgirl who took her own life in 2017, whether he’s okay with your growth vs controls trade-off? “I have no doubt that Instagram helped kill my daughter,” said Russell in an interview with the BBC this week.

After her death, Molly’s parents found she had been following accounts on Instagram that were sharing graphic material related to self-harming and suicide, including some accounts that actively encourage people to cut themselves. “We didn’t know that anything like that could possibly exist on a platform like Instagram,” said Russell.

Without a human editor in the mix, your algorithmic recommendations are blind to risk and suffering. Built for global scale, they get on with the expansionist goal of maximizing clicks and views by serving more of the same sticky stuff. And more extreme versions of things users show an interest in to keep the eyeballs engaged.

So when you write about making services that “billions” of “people around the world love and use” forgive us for thinking that sounds horribly glib. The scales of suffering don’t sum like that. If your entertainment product has whipped up genocide anywhere in the world — as the UN said Facebook did in Myanmar — it’s failing regardless of the proportion of users who are having their time pleasantly wasted on and by Facebook.

And if your algorithms can’t incorporate basic checks and safeguards so they don’t accidentally encourage vulnerable teens to commit suicide you really don’t deserve to be in any consumer-facing business at all.

Yet your article shows no sign you’ve been reflecting on the kinds of human tragedies that don’t just play out on your platform but can be an emergent property of your targeting algorithms.

You focus instead on what you call “clear benefits to this business model”.

The benefits to Facebook’s business are certainly clear. You have the billions in quarterly revenue to stand that up. But what about the costs to the rest of us? Human costs are harder to quantify but you don’t even sound like you’re trying.

You do write that you’ve heard “many questions” about Facebook’s business model. Which is most certainly true but once again you’re playing down the level of political and societal concern about how your platform operates (and how you operate your platform) — deflecting and reframing what Facebook is to cast your ad business as a form of quasi-philanthropy; a comfortable discussion topic and self-serving idea you’d much prefer we were all sold on.

It’s also hard to shake the feeling that your phrasing at this point is intended as a bit of an in-joke for Facebook staffers — to smirk at the ‘dumb politicians’ who don’t even know how Facebook makes money.

Y’know, like you smirked…

Then you write that you want to explain how Facebook operates. But, thing is, you don’t explain — you distract, deflect, equivocate and mislead, which has been your business’ strategy through many months of scandal (that and worse tactics — such as paying a PR firm that used oppo research tactics to discredit Facebook critics with smears).

Dodging is another special power; such as how you dodged repeat requests from international parliamentarians to be held accountable for major data misuse and security breaches.

The Zuckerberg ‘open letter’ mansplain, which typically runs to thousands of blame-shifting words, is another standard issue production from the Facebook reputation crisis management toolbox.

And here you are again, ironically enough, mansplaining in a newspaper; an industry that your platform has worked keenly to gut and usurp, hungry to supplant editorially guided journalism with the moral vacuum of algorithmically geared space-filler which, left unchecked, has been shown, time and again, to lift divisive and damaging content into public view.

The latest Zuckerberg screed has nothing new to say. It’s pure spin. We’ve read scores of self-serving Facebook apologias over the years and can confirm Facebook’s founder has made a very tedious art of selling abject failure as some kind of heroic lack of perfection.

But the spin has been going on for far, far too long. Fifteen years, as you remind us. Yet given that hefty record it’s little wonder you’re moved to pen again — imagining that another word blast is all it’ll take for the silly politicians to fall in line.

Thing is, no one is asking Facebook for perfection, Mark. We’re looking for signs that you and your company have a moral compass. Because the opposite appears to be true. (Or as one UK parliamentarian put it to your CTO last year: “I remain to be convinced that your company has integrity”.)

Facebook has scaled to such an unprecedented, global size exactly because it has no editorial values. And you say again now that you want to be all things to all men. Put another way, that means there’s a moral vacuum sucking away at your platform’s core; a supermassive ethical black hole that scales ad dollars by the billions because you won’t tie the kind of process knots necessary to treat humans like people, not pairs of eyeballs.

You don’t design against negative consequences or to pro-actively avoid terrible impacts — you let stuff happen and then send in the ‘trust & safety’ team once the damage has been done.

You might call designing against negative consequences a ‘growth bottleneck’; others would say it’s having a conscience.

Everything standing in the way of scaling Facebook’s usage is, under the Zuckerberg regime, collateral damage — hence the old mantra of ‘move fast and break things’ — whether it’s social cohesion, civic values or vulnerable individuals.

This is why it takes a celebrity defamation lawsuit to force your company to dribble a little more resource into doing something about scores of professional scammers paying you to pop their fraudulent schemes in a Facebook “ads” wrapper. (Albeit, you’re only taking some action in the UK in this particular case.)

Funnily enough — though it’s not at all funny and it doesn’t surprise us — Facebook is far slower and patchier when it comes to fixing things it broke.

Of course there will always be people who thrive with a digital megaphone like Facebook thrust in their hand. Scammers being a pertinent example. But the measure of a civilized society is how it protects those who can’t defend themselves from targeted attacks or scams because they lack the protective wrap of privilege. Which means people who aren’t famous. Not public figures like Martin Lewis, the consumer champion who has his own platform and enough financial resources to file a lawsuit to try to make Facebook do something about how its platform supercharges scammers.

Zuckerberg’s slippery call to ‘fight bad content with more content’ — or to fight Facebook-fuelled societal division by shifting even more of the apparatus of civic society onto Facebook — fails entirely to recognize this asymmetry.

And even in the Lewis case, Facebook remains a winner; Lewis dropped his suit and Facebook got to make a big show of signing over £500k worth of ad credit coupons to a consumer charity that will end up giving them right back to Facebook.

The company’s response to problems its platform creates is to look the other way until a trigger point of enough bad publicity gets reached. At which critical point it flips the usual crisis PR switch and sends in a few token clean up teams — who scrub a tiny proportion of terrible content; or take down a tiny number of fake accounts; or indeed make a few token and heavily publicized gestures — before leaning heavily on civil society (and on users) to take the real strain.

You might think Facebook reaching out to respected external institutions is a positive step. A sign of a maturing mindset and a shift towards taking greater responsibility for platform impacts. (And in the case of scam ads in the UK it’s donating £3M in cash and ad credits to a bona fide consumer advice charity.)

But this is still Facebook dumping problems of its making on an already under-resourced and over-worked civic sector at the same time as its platform supersizes their workload.

In recent years the company has also made a big show of getting involved with third party fact checking organizations across various markets — using these independents to stencil in a PR strategy for ‘fighting fake news’ that also entails Facebook offloading the lion’s share of the work. (It’s not paying fact checkers anything; given the clear conflict of interest that would represent, it obviously can’t.)

So again external organizations are being looped into Facebook’s mess — in this case to try to drain the swamp of fakes being fenced and amplified on its platform — even as the scale of the task remains hopeless, and all sorts of junk continues to flood into and pollute the public sphere.

What’s clear is that none of these organizations has the scale or the resources to fix problems Facebook’s platform creates. Yet it serves Facebook’s purposes to be able to point to them trying.

And all the while Zuckerberg is hard at work fighting to fend off regulation that could force his company to take far more care and spend far more of its own resources (and profits) monitoring the content it monetizes by putting it in front of eyeballs.

The Facebook founder is fighting because he knows his platform is a targeted attack: on individual attention, via privacy-hostile behaviorally targeted ads (his euphemism for this is “relevant ads”); on social cohesion, via divisive algorithms that drive outrage in order to maximize platform engagement; and on democratic institutions and norms, by systematically eroding consensus and the potential for compromise between the different groups that every society is comprised of.

In his WSJ post Zuckerberg can only claim Facebook doesn’t “leave harmful or divisive content up”. He has no defence against Facebook having put it up and enabled it to spread in the first place.

Sociopaths relish having a soapbox so unsurprisingly these people find a wonderful home on Facebook. But where does empathy fit into the antisocial media equation?

As for Facebook being a ‘free’ service — a point Zuckerberg is most keen to impress in his WSJ post — it’s of course a cliché to point out that ‘if it’s free you’re the product’. (Or as the even older saying goes: ‘There’s no such thing as a free lunch’).

But for the avoidance of doubt, “free” access does not mean cost-free access. And in Facebook’s case the cost is both individual (to your attention and your privacy); and collective (to the public’s attention and to social cohesion).

The much bigger question is who actually benefits if “everyone” is on Facebook, as Zuckerberg would prefer. Facebook isn’t the Internet. Facebook doesn’t offer the sole means of communication, digital or otherwise. People can, and do, ‘connect’ (if you want to use such a transactional word for human relations) just fine without Facebook.

So beware the hard and self-serving sell in which the founder of the now 15-year-old Facebook seeks yet again to recast privacy as an unaffordable luxury.

Actually, Mark, it’s a fundamental human right.

The best argument Zuckerberg can muster for his goal of universal Facebook usage being good for anything other than his own business’ bottom line is to suggest small businesses could use that kind of absolute reach to drive extra growth of their own.

Though he only provides a few general data-points to support the claim; saying there are “more than 90M small businesses on Facebook” which “make up a large part of our business” (how large?) — and claiming “most” (51%?) couldn’t afford TV ads or billboards (might they be able to afford other online or newspaper ads though?); he also cites a “global survey” (how many businesses surveyed?), presumably run by Facebook itself, which he says found “half the businesses on Facebook say they’ve hired more people since they joined” (but how did you ask the question, Mark?; we’re concerned it might have been rather leading), and from there he leaps to the implied conclusion that “millions” of jobs have essentially been created by Facebook.

But did you control for common causes Mark? Or are you just trying to take credit for others’ hard work because, well, it’s politically advantageous for you to do so?

Whether Facebook’s claims about being great for small business stand up to scrutiny or not, if people’s fundamental rights are being wholesale flipped for SMEs to make a few extra bucks that’s an unacceptable trade off.

“Millions” of jobs suggestively linked to Facebook sure sounds great — but you can’t and shouldn’t overlook disproportionate individual and societal costs, as Zuckerberg is urging policymakers to here.

Let’s also not forget that some of the small business ‘jobs’ that Facebook’s platform can take definitive and major credit for creating include the Macedonian teens who became hyper-adept at seeding Facebook with fake U.S. political news around the 2016 presidential election. But presumably those aren’t the kind of jobs Zuckerberg is advocating for.

He also repeats the spurious claim that Facebook gives users “complete control” over what it does with personal information collected for advertising.

We’ve heard this time and time again from Zuckerberg and yet it remains pure BS.

[Image caption: Facebook co-founder, chairman and CEO Mark Zuckerberg concludes his testimony before a combined Senate Judiciary and Commerce committee hearing on Capitol Hill, April 10, 2018, after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Win McNamee/Getty Images)]

Yo Mark! First up we’re still waiting for your much trumpeted ‘Clear History’ tool. You know, the one you claimed you thought of under questioning in Congress last year (and later used to fend off follow up questions in the European Parliament).

Reportedly the tool is due this Spring. But even when it does finally drop it represents another classic piece of gaslighting by Facebook, given how it seeks to normalize (and so enable) the platform’s pervasive abuse of its users’ data.

Truth is, there is no master ‘off’ switch for Facebook’s ongoing surveillance. Such a switch — were it to exist — would represent a genuine control for users. But Zuckerberg isn’t offering it.

Instead his company continues to groom users into accepting being creeped on by offering pantomime settings that boil down to little more than privacy theatre — if they even realize they’re there.

‘Hit the button! Reset cookies! Delete browsing history! Keep playing Facebook!’

An interstitial reset is clearly also a dilute decoy. It’s not the same as being able to erase all extracted insights Facebook’s infrastructure continuously mines from users, using these derivatives to target people with behavioral ads; tracking and profiling on an ongoing basis by creeping on browsing activity (on and off Facebook), and also by buying third party data on its users from brokers.

Multiple signals and inferences are used to flesh out individual ad profiles on an ongoing basis, meaning the files are never static. And there’s simply no way to tell Facebook to burn your digital ad mannequin. Not even if you delete your Facebook account.

Nor, indeed, is there a way to get a complete read out from Facebook on all the data it’s attached to your identity. Even in Europe, where companies are subject to strict privacy laws that place a legal requirement on data controllers to disclose all personal data they hold on a person on request, as well as who they’re sharing it with, for what purposes, under what legal grounds.

Last year Paul-Olivier Dehaye, the founder of PersonalData.IO, a startup that aims to help people control how their personal data is accessed by companies, recounted in the UK parliament how he’d spent years trying to obtain all his personal information from Facebook — with the company resorting to legal arguments to block his subject access request.

Dehaye said he had succeeded in extracting a bit more of his data from Facebook than it initially handed over. But it was still just a “snapshot”, not an exhaustive list, of all the advertisers who Facebook had shared his data with. This glimpsed tip implies a staggeringly massive personal data iceberg lurking beneath the surface of each and every one of the 2.2BN+ Facebook users. (Though the figure is likely even more massive because it tracks non-users too.)

Zuckerberg’s “complete control” wording is therefore at best self-serving and at worst an outright lie. Facebook’s business has complete control of users by offering only a superficial layer of confusing and fiddly, ever-shifting controls that demand continued presence on the platform to use them, and ongoing effort to keep on top of settings changes (which are always, to a fault, privacy hostile), making managing your personal data a life-long chore.

Facebook’s power dynamic puts the onus squarely on the user to keep finding and hitting the reset button.

But this too is a distraction. Resetting anything on its platform is largely futile, given Facebook retains whatever behavioral insights it already stripped off of your data (and fed to its profiling machinery). And its omnipresent background snooping carries on unchecked, amassing fresh insights you also can’t clear.

Nor does Clear History offer any control for the non-users Facebook tracks via the pixels and social plug-ins it’s larded around the mainstream web. Zuckerberg was asked about so-called shadow profiles in Congress last year — which led to this awkward exchange where he claimed not to know what the phrase refers to.

EU MEPs also seized on the issue, pushing him to respond. He did so by attempting to conflate surveillance and security — by claiming it’s necessary for Facebook to hold this data to keep “bad content out”. Which seems a bit of an ill-advised argument to make given how badly that mission is generally going for Facebook.

Still, Zuckerberg repeats the claim in the WSJ post, saying information collected for ads is “generally important for security and operating our services” — using this to address what he couches as “the important question of whether the advertising model encourages companies like ours to use and store more information than we otherwise would”.

So, essentially, Facebook’s founder is saying that the price for Facebook’s existence is pervasive surveillance of everyone, everywhere, with or without your permission.

Though he doesn’t express that ‘fact’ as a cost of his “free” platform. RIP privacy indeed.

Another pertinent example of Zuckerberg simply not telling the truth when he wrongly claims Facebook users can control their information vis-a-vis his ad business — an example which also happens to underline how pernicious his attempts to use “security” to justify eroding privacy really are — bubbled into view last fall, when Facebook finally confessed that mobile phone numbers users had provided for the specific purpose of enabling two-factor authentication (2FA) to increase the security of their accounts were also used by Facebook for ad targeting.

A company spokesperson told us that if a user wanted to opt out of the ad-based repurposing of their mobile phone data they could use non-phone number based 2FA — though Facebook only added the ability to use an app for 2FA in May last year.

What Facebook is doing on the security front is especially disingenuous BS in that it risks undermining security practice by bundling a respected tool (2FA) with ads that creep on people.

And there’s plenty more of this kind of disingenuous nonsense in Zuckerberg’s WSJ post — where he repeats a claim we first heard him utter last May, at a conference in Paris, when he suggested that following changes made to Facebook’s consent flow, ahead of updated privacy rules coming into force in Europe, the fact European users had (mostly) swallowed the new terms, rather than deleting their accounts en masse, was a sign people were majority approving of “more relevant” (i.e more creepy) Facebook ads.

Au contraire, it shows nothing of the sort. It simply underlines the fact Facebook still does not offer users a free and fair choice when it comes to consenting to their personal data being processed for behaviorally targeted ads — despite free choice being a requirement under Europe’s General Data Protection Regulation (GDPR).

If Facebook users are forced to ‘choose’ between being creeped on or deleting their account on the dominant social service where all their friends are it’s hardly a free choice. (And GDPR complaints have been filed over this exact issue of ‘forced consent‘.)

Add to that, as we said at the time, Facebook’s GDPR tweaks were lousy with manipulative, dark pattern design. So again the company is leaning on users to get the outcomes it wants.

It’s not a fair fight, any which way you look at it. But here we have Zuckerberg, the BS salesman, trying to claim his platform’s ongoing manipulation of people already enmeshed in the network is evidence for people wanting creepy ads.


The truth is that most Facebook users remain unaware of how extensively the company creeps on them (per this recent Pew research). And fiddly controls are of course even harder to get a handle on if you’re sitting in the dark.

Zuckerberg appears to concede a little ground on the transparency and control point when he writes that: “Ultimately, I believe the most important principles around data are transparency, choice and control.” But all the privacy-hostile choices he’s made, the faux controls he’s offered, and the data mountain he simply won’t ‘fess up to sitting on show, beyond reasonable doubt, that the company cannot and will not self-regulate.

If Facebook is allowed to continue setting its own parameters and choosing its own definitions (for “transparency, choice and control”) users won’t have even one of the three principles, let alone the full house, as well they should. Facebook will just keep moving the goalposts and marking its own homework.

You can see this in the way Zuckerberg fuzzes and elides what his company really does with people’s data; and how he muddies and muddles uses for the data — such as by saying he doesn’t know what shadow profiles are; or claiming users can download ‘all their data’; or that ad profiles are somehow essential for security; or by repurposing 2FA digits to personalize ads too.

How do you try to prevent the purpose limitation principle being applied to regulate your surveillance-reliant big data ad business? Why by mixing the data streams of course! And then trying to sow confusion among regulators and policymakers by forcing them to unpick your mess.

Much like Facebook is forcing civic society to clean up its messy antisocial impacts.

Europe’s GDPR is focusing the conversation, though, and targeted complaints filed under the bloc’s new privacy regime have shown they can have teeth and so bite back against rights incursions.

But before we put another self-serving Zuckerberg screed to rest, let’s take a final look at his description of how Facebook’s ad business works. Because this is also seriously misleading. And cuts to the very heart of the “transparency, choice and control” issue he’s quite right is central to the personal data debate. (He just wants to get to define what each of those words means.)

In the article, Zuckerberg claims “people consistently tell us that if they’re going to see ads, they want them to be relevant”. But who are these “people” of which he speaks? If he’s referring to the aforementioned European Facebook users, who accepted updated terms with the same horribly creepy ads because he didn’t offer them any alternative, we would suggest that’s not a very affirmative signal.

Now if it were true that a generic group of ‘Internet people’ were consistently saying anything about online ads the loudest message would most likely be that they don’t like them. Click through rates are fantastically small. And hence also lots of people using ad blocking tools. (Growth in usage of ad blockers has also occurred in parallel with the increasing incursions of the adtech industrial surveillance complex.)

So Zuckerberg’s logical leap to claim users of free services want to be shown only the most creepy ads is really a very odd one.

Let’s now turn to Zuckerberg’s use of the word “relevant”. As we noted above, this is a euphemism. It conflates many concepts but principally it’s used by Facebook as a cloak to shield and obscure the reality of what it’s actually doing (i.e. privacy-hostile people profiling to power intrusive, behaviourally microtargeted ads) in order to avoid scrutiny of exactly those creepy and intrusive Facebook practices.

Yet the real sleight of hand is how Zuckerberg glosses over the fact that ads can be relevant without being creepy. Because ads can be contextual. They don’t have to be behaviorally targeted.

Ads can be based on — for example — a real-time search/action plus a user’s general location. Without needing to operate a vast, all-pervasive privacy-busting tracking infrastructure to feed open-ended surveillance dossiers on what everyone does online, as Facebook chooses to.
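To make the contrast concrete, here is a minimal, hypothetical sketch of contextual selection: the ad is chosen from what the current page and a coarse, request-level location reveal, with no cookie, identifier or behavioral profile in the loop. All names and numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    creative_id: str
    topics: set       # e.g. {"gardening"}
    regions: set      # coarse campaign regions, e.g. {"ES"}
    bid_cpm: float

def pick_contextual_ad(page_topics: set, country: str, inventory: list):
    """Pick an ad using only the page's topics and a coarse location
    derived from the request -- no cookie, no user profile."""
    eligible = [
        ad for ad in inventory
        if ad.topics & page_topics and country in ad.regions
    ]
    # Highest-bidding eligible creative wins; nothing about the choice
    # depends on who the reader is or what they did elsewhere.
    return max(eligible, key=lambda ad: ad.bid_cpm, default=None)

# Example: a reader in Spain viewing a gardening article.
inventory = [
    Ad("garden-tools-es", {"gardening"}, {"ES"}, bid_cpm=1.80),
    Ad("flight-deals-uk", {"travel"}, {"GB"}, bid_cpm=2.40),
]
print(pick_contextual_ad({"gardening", "plants"}, "ES", inventory))
```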

And here Zuckerberg gets really disingenuous because he uses a benign-sounding example of a contextual ad (the example he chooses contains an interest and a general location) to gloss over a detail-light explanation of how Facebook’s people tracking and profiling apparatus works.

“Based on what pages people like, what they click on, and other signals, we create categories — for example, people who like pages about gardening and live in Spain — and then charge advertisers to show ads to that category,” he writes, with that slipped in reference to “other signals” doing some careful shielding work there.

Other categories that Facebook’s algorithms have been found ready and willing to accept payment to run ads against in recent years include “jew-hater”, “How to burn Jews” and “Hitler did nothing wrong”.

Funnily enough Zuckerberg doesn’t mention those actual Facebook microtargeting categories in his glossy explainer of how its “relevant” ads business works. But they offer a far truer glimpse of the kinds of labels Facebook’s business sticks on people.

As we wrote last week, the case against behavioral ads is stacking up. Zuckerberg’s attempt to spin the same self-serving lines should really fool no one at this point.

Nor should regulators be derailed by the lie that Facebook’s creepy business model is the only version of adtech possible. It’s not even the only version of profitable adtech currently available. (Contextual ads have made Google alternative search engine DuckDuckGo profitable since 2014, for example.)

Simply put, adtech doesn’t have to be creepy to work. And ads that don’t creep on people would give publishers greater ammunition to sell ad-block-using readers on whitelisting their websites. A new generation of people-sensitive startups is also busy working on new forms of ad targeting that bake in privacy by design.

And with legal and regulatory risk rising, intrusive and creepy adtech that demands the equivalent of ongoing strip searches of every Internet user on the planet really looks to be on borrowed time.

Facebook’s problem is it scrambled for big data and, finding it easy to suck up tonnes of the personal stuff on the unregulated Internet, built an antisocial surveillance business that needs to capture both sides of its market — eyeballs and advertisers — and keep them buying into an exploitative and even abusive relationship for its business to keep minting money.

Pivoting that tanker would certainly be tough, and in any case who’d trust a Zuckerberg who suddenly proclaimed himself the privacy messiah?

But it sure is a long way from ‘move fast and break things’ to trying to claim there’s only one business model to rule them all.


Source: The Tech Crunch


How a small French privacy ruling could remake adtech for good

Posted on Nov 20, 2018

A ruling in late October against a little-known French adtech firm that popped up on the national data watchdog’s website earlier this month is causing ripples of excitement to run through privacy watchers in Europe who believe it signals the beginning of the end for creepy online ads.

The excitement is palpable.

Impressively so, given the dry CNIL decision against mobile “demand side platform” Vectaury was only published in the regulator’s native dense French legalese.

Digital advertising trade press AdExchanger picked up on the decision yesterday.

Here’s the killer paragraph from CNIL’s ruling — translated into “rough English” by my TC colleague Romain Dillet:

The requirement based on the article 7 above-mentioned isn’t fulfilled with a contractual clause that guarantees validly collected initial consent. The company VECTAURY should be able to show, for all data that it is processing, the validity of the expressed consent.

In plainer English, this is being interpreted by data experts as the regulator stating that consent to processing personal data cannot be gained through a framework arrangement which bundles a number of uses behind a single “I agree” button that, when clicked, passes consent to partners via a contractual relationship.

CNIL’s decision suggests that bundling consent to partner processing in a contract is not, in and of itself, valid consent under the European Union’s General Data Protection Regulation (GDPR) framework.

Consent under this regime must be specific, informed and freely given. It says as much in the text of GDPR.

But now, on top of that, the CNIL’s ruling suggests a data controller has to be able to demonstrate the validity of the consent — so cannot simply tuck consent inside a contractual “carpet-bag” that gets passed around to everyone else in their chain as soon as the user clicks “I agree.”

This is important, because many widely used digital advertising consent frameworks rolled out to websites in Europe this year — in claimed compliance with GDPR — are using a contractual route to obtain consent, and bundling partner processing behind often hideously labyrinthine consent flows.

The experience for web users in the EU right now is not great. But it could be leading to a much better internet down the road.

Where’s the consent for partner processing?

Even on a surface level the current crop of confusing consent mazes look problematic.

But the CNIL ruling suggests there are deeper and more structural problems lurking and embedded within. And as regulators dig in and start to unpick adtech contradictions it could force a change of mindset across the entire ecosystem.

As ever, when talking about consent and online ads the overarching point to remember is that no consumer given a genuine full disclosure about what’s being done with their personal data in the name of behavioral advertising would freely consent to personal details being hawked and traded across the web just so a bunch of third parties can bag a profit share.

This is why, despite GDPR being in force (since May 25), there are still so many tortuously confusing “consent flows” in play.

The longstanding online T&Cs trick of obfuscating and socially engineering consent remains an unfortunately standard playbook. But, less than six months into GDPR we’re still very much in a “phoney war” phase. More regulatory rulings are needed to lay down the rules by actually enforcing the law.

And CNIL’s recent activity suggests more to come.

In the Vectaury case, the mobile ad firm used a template framework for its consent flow that had been created by industry trade association and standards body, IAB Europe.

It did make some of its own choices, using its own wording on an initial consent screen and pre-ticking the purposes (another big GDPR no-no). But the bundling of data purposes behind a single opt in/out button is the core IAB Europe design. So CNIL’s ruling suggests there could be trouble ahead for other users of the template.

IAB Europe’s CEO, Townsend Feehan, told us it’s working on a statement reaction to the CNIL decision, but suggested Vectaury fell foul of the regulator because it may not have implemented the “Transparency & Consent Framework-compliant” consent management platform (CMP) framework — as it’s tortuously known — correctly.

So either “the ‘CMP’ that they implemented did not align to our Policies, or choices they could have made in the implementation of their CMP that would have facilitated compliance with the GDPR were not made,” she suggested to us via email.

Though that sidesteps the contractual crux point that’s really exciting privacy advocates — and making them point to the CNIL as having slammed the first of many unbolted doors.

The French watchdog has made a handful of other decisions in recent months, also involving geolocation-harvesting adtech firms, and also for processing data without consent.

So regulatory activity on the GDPR+adtech front has been ticking up.

Its decision to publish these rulings suggests it has wider concerns about the scale and privacy risks of current programmatic ad practices in the mobile space than can be attached to any single player.

So the suggestion is that just publishing the rulings looks intended to put the industry on notice…

Meanwhile, adtech giant Google has also made itself unpopular with publisher “partners” over its approach to GDPR by forcing them to collect consent on its behalf. And in May a group of European and international publishers complained that Google was imposing unfair terms on them.

The CNIL decision could sharpen that complaint too — raising questions over whether audits of publishers that Google said it would carry out will be enough for the arrangement to pass regulatory muster.

For a demand-side platform like Vectaury, which was acting on behalf of more than 32,000 partner mobile apps with user eyeballs to trade for ad cash, achieving GDPR compliance would mean either asking users for genuine consent and/or having a very large number of contracts on which it’s doing actual due diligence.

Yet Google is orders of magnitude more massive, of course.

The Vectaury file gives us a fascinating little glimpse into adtech “business as usual.” Business which also wasn’t, in the regulator’s view, legal.

The firm was harvesting a bunch of personal data (including people’s location and device IDs) on its partners’ mobile users via an SDK embedded in their apps, and receiving bids for these users’ eyeballs via another standard piece of the programmatic advertising pipe — ad exchanges and supply side platforms — which also get passed personal data so they can broadcast it widely via the online ad world’s real-time bidding (RTB) system. That’s to solicit potential advertisers’ bids for the attention of the individual app user… The wider the personal data gets spread, the more potential ad bids.

That scale is how programmatic works. It also looks horrible from a GDPR “privacy by design and default” standpoint.
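For a sense of what that spreading looks like in practice, here is a simplified, purely illustrative sketch of a bid request being fanned out, loosely modeled on OpenRTB-style fields (all values and endpoints are made up): the same bundle of device identifiers and location goes to every connected bidder, whether or not it wins the auction.

```python
import json
from uuid import uuid4

# Loosely modeled on OpenRTB-style fields; simplified and illustrative only.
bid_request = {
    "id": str(uuid4()),                        # auction ID
    "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],
    "app": {"bundle": "com.example.newsapp"},  # the partner app hosting the SDK
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # mobile advertising ID
        "geo": {"lat": 48.8566, "lon": 2.3522},         # precise location
    },
    "user": {"id": "partner-cookie-or-uid"},
}

# The exchange / supply-side platform forwards the same request to every
# connected bidder. This fan-out is the "broadcast": each recipient ends up
# holding the identifiers and location whether or not it wins the auction.
bidders = ["dsp-a.example", "dsp-b.example", "dsp-c.example"]
payload = json.dumps(bid_request)
for endpoint in bidders:
    print(f"POST https://{endpoint}/bid  ({len(payload)} bytes of user data)")
```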

The sprawling process of programmatic explains the very long list of “partners” nested non-transparently behind the average publisher’s online consent flow. The industry, as it is shaped now, literally trades on personal data.

So if the consent rug it’s been squatting on for years suddenly gets ripped out from underneath it, there would need to be a radical reshaping of ad-targeting practices to avoid trampling on EU citizens’ fundamental rights.

GDPR’s really big change was supersized fines. So ignoring the law would get very expensive.

Oh hai real-time bidding!

In Vectaury’s case, CNIL discovered the company was holding the personal data of a staggering 67.6 million people when it conducted an on-site inspection of the company in April 2018.

That already sounds like A LOT of data for a small mobile adtech player. Yet it might actually have been a tiny fraction of the personal data the company was routinely handling — given that Vectaury’s own website claims 70 percent of collected data is not stored.

In the decision there was no fine, but CNIL ordered the firm to delete all data it had not already deleted (having judged collection illegal given consent was not valid); and to stop processing data without consent.

But given the personal-data-based hinge of current-gen programmatic adtech, that essentially looks like an order to go out of business. (Or at least out of that business.)

And now we come to another interesting GDPR adtech complaint that’s not yet been ruled on by the two DPAs in question (Ireland and the U.K.) — but which looks even more compelling in light of the CNIL Vectaury decision because it picks at the adtech scab even more daringly.

Filed last month with the Irish Data Protection Commission and the U.K.’s ICO, this adtech complaint — the work of three individuals, Johnny Ryan of private web browser Brave; Jim Killock, exec director of digital and civil rights group, the Open Rights Group; and University College London data protection researcher, Michael Veale — targets the RTB system itself.

Here’s how Ryan, Killock and Veale summarized the complaint when they announced it last month:

Every time a person visits a website and is shown a “behavioural” ad on a website, intimate personal data that describes each visitor, and what they are watching online, is broadcast to tens or hundreds of companies. Advertising technology companies broadcast these data widely in order to solicit potential advertisers’ bids for the attention of the specific individual visiting the website.

A data breach occurs because this broadcast, known as a “bid request” in the online industry, fails to protect these intimate data against unauthorized access. Under the GDPR this is unlawful.

The GDPR, Article 5, paragraph 1, point f, requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss.” If you cannot protect data in this way, then the GDPR says you cannot process the data.

Ryan tells TechCrunch that the crux of the complaint is not related to the legal basis of the data sharing but rather focuses on the processing itself — arguing “that it itself is not adequately secure… that there aren’t adequate controls.”

Though he says there’s a consent element too, and so sees the CNIL ruling bolstering the RTB complaint. (On that, keep in mind that CNIL judged Vectaury should not have been holding the RTB data of 67.6M people because it did not have valid consent.)

“We do pick up on the issue of consent in the complaint. And this particular CNIL decision has a bearing on both of those issues,” he argues. “It demonstrates in a concrete example that involved investigators going into physical premises and checking the machines — it demonstrates that even one small company was receiving tens of millions of people’s personal data in this illegal way.

“So the breach is very real. And it demonstrates that it’s not unreasonable to suggest that the consent is meaningless in any case.”

Reaching for a handy visual explainer, he continues: “If I leave a briefcase full of personal data in the middle of Charing Cross station at 11am and it’s really busy, that’s a breach. That would have been a breach back in the 1970s. If my business model is to drive up to Charing Cross station with a dump-truck and dump briefcases onto the street at 11am in the full knowledge that my business partners will all scramble around and try and grab them — and then to turn up at 11.01am and do the same thing. And then 11.02am. And every microsecond in between. That’s still a fucking data breach!

“It doesn’t matter if you think you’ve consent or anything else. You have to [comply with GDPR Article 5, paragraph 1, point f] in order to even be able to ask for a legal basis. There are plenty of other problems but that’s the biggest one that we highlighted. That’s our reason for saying this is a breach.”

“Now what CNIL has said is this company, Vectaury, was processing personal data that it did not lawfully have — and it got them through RTB,” he adds, spelling the point out. “So back to the GDPR — GDPR is saying you can’t process data in a way that doesn’t ensure protection against unauthorized or unlawful processing.”

In other words, RTB as a funnel for processing personal data looks to be on inherently shaky ground because it’s inherently putting all this personal data out there and at risk…

What’s bad for data brokers…

In another loop back, Ryan says the regulators have been in touch since the RTB complaint was filed, inviting the complainants to submit more information.

He says the CNIL Vectaury decision will be incorporated into further submissions, predicting: “This is going to be bounced around multiple regulators.”

The trio is keen to generate extra bounce by working with NGOs to enlist other individuals to file similar complaints in other EU Member States — to make the action a pan-European push, just like programmatic advertising itself.

“We now have the opportunity to connect our complaint with the excellent work that Privacy International has done, showing where these data end up, and with the excellent work that CNIL has done showing exactly how this actually applies. And this decision from CNIL takes, essentially, my report that went with our complaint and shows exactly how that applies in the real world,” he continues.

“I was writing in the abstract — CNIL has now made a decision that is very much not in the abstract, it’s in the real world affecting millions of people… This will be a European-wide complaint.”

But what does programmatic advertising that doesn’t entail trading on people’s grubbily obtained personal data actually look like? If there were no personal data in bid requests, Ryan believes quite a few things would happen. Such as, for example, the demise of clickbait.

“There would be no way to take your TechCrunch audience and buy it cheaper on some shitty website. There would be no more of that arbitrage stuff. Clickbait would die! All that nasty stuff would go away,” he suggests.

(And, well, full disclosure: We are TechCrunch — so we can confirm that does sound really great to us!)

He also reckons ad values would go up. Which would also be good news for publishers. (“Because the only place you could buy the TechCrunch audience would be on TechCrunch — that’s a really big deal!”)

He even suggests ad fraud might shrink because the incentives would shift. Or at least they could, so long as the “worthy” publishers that are able to survive in the new ad world order don’t end up being complicit with bot fraud anyway.

As it stands, publishers are being squeezed between the twin plates of the dominant adtech platforms (Google and Facebook), forced to give up the majority of their ad revenue — leaving the media industry with a shrinking slice of the spend (which can be as lean as ~30 percent).

That then has a knock-on impact on funding newsrooms and quality journalism. And, well, on the wider web too — given all the weird incentives that operate in today’s internet, dominated by big tech social media platforms.

Meanwhile, a privacy-sucking programmatic monster is something that only shadowy background data brokers, which lack any meaningful relationship with the people whose data they’re feeding to the beast, could truly love.

And, well, Google and Facebook.

Ryan’s view is that the reason an adtech duopoly exists boils down to the “audience leakage” being enabled by RTB. Leakage which, in his view, also isn’t compliant with EU privacy laws.

He reckons the fix for this problem is equally simple: Keep doing RTB but without any personal data.

A real-time ad bidding system that’s been stripped of personal data does not mean no targeted ads. It could still support ad targeting based on real-time factors such as an approximate location (say to a city region) and/or generic and aggregated data.

Crucially, it would not use unique identifiers that enable linking ad bids to an individual’s entire digital footprint and bid request history — as is the case now. Which essentially translates into: RIP privacy rights.

Ryan argues that RTB without personal data would still offer plenty of “value” to advertisers — who could still reach people based on general locations and via real-time interests. (It’s a model that sounds much like what privacy search engine DuckDuckGo is doing, and which has also been growing.)
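For the sake of illustration, here is a rough sketch of what that could mean in practice. The field names and the choice of granularity are assumptions made for this example, not a published spec: the bid request keeps contextual and coarse real-time signals, and deliberately drops anything that uniquely identifies the person.

```python
# Purely illustrative sketch of the idea: take an RTB bid request and strip the
# fields that uniquely identify a person, while keeping coarse, real-time signals
# (e.g. a city-level location and the app/site context). Field names and the
# granularity chosen here are assumptions for illustration, not a published spec.

def strip_personal_data(bid_request: dict) -> dict:
    cleaned = {
        "id": bid_request["id"],
        "app": bid_request.get("app", {}),   # contextual: which app/site the ad will appear in
    }
    geo = bid_request.get("device", {}).get("geo", {})
    # Keep only an approximate location (here: city name), drop precise coordinates.
    if "city" in geo:
        cleaned["device"] = {"geo": {"city": geo["city"]}}
    # Deliberately drop: device ad IDs, user IDs, IP addresses, and anything else
    # that would let bidders link this request to an individual's history.
    return cleaned

example = {
    "id": "auction-0002",
    "app": {"bundle": "com.example.newsapp"},
    "device": {
        "ifa": "3f2a9c1e-0000-0000-0000-device-ad-id",
        "ip": "203.0.113.7",
        "geo": {"lat": 48.8566, "lon": 2.3522, "city": "Paris"},
    },
    "user": {"id": "dsp-user-7421"},
}

print(strip_personal_data(example))
# -> {'id': 'auction-0002', 'app': {'bundle': 'com.example.newsapp'}, 'device': {'geo': {'city': 'Paris'}}}
```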

The really big problem, though, is turning the behavioral ad tanker around, given how deeply embedded the ecosystem is, even as the duopoly milks it.

That’s also why Ryan is so hopeful now, though, having parsed the CNIL decision.

His reading is that regulators will play a decisive role in forcing the ad industry’s hand — and pushing through much-needed change in its targeting behavior.

“Unless the entire industry moves together, no one can be the first to remove personal data from bid requests but if the regulators step in in a big way… and say you’re all going to go out of business if you keep putting personal data into bid requests then everyone will come together — like the music industry was forced to eventually, under Steve Jobs,” he argues. “Everyone can together decide on a new short term disadvantageous but long term highly advantageous change.”

Of course such a radical reshaping is not going to happen overnight. Regulatory triggers tend to be slow-motion unfoldings at the best of times. You also have to factor in the inevitable legal challenges.

But look closely and you’ll see both momentum massing behind privacy — and regulatory writing on the wall.

“Are we going to see programmatic forced to be non-personal and therefore better for every single citizen of the world (except, say, if they work for a data broker)?” adds Ryan, posing his own concluding question. “Will that massive change, which will help society and the web… will that change happen before Christmas? No. But it’s worth working on. And it’s going to take some time.

“It could be two years from now that we have the finality. But a finality there will be. Detroit was only able to fight against regulation for so long. It does come.”

Who’d have thought “taking back control” could ever sound so good?


Source: The Tech Crunch
