
Targeted ads offer little extra value for online publishers, study suggests

Posted on May 31, 2019

How much value do online publishers derive from behaviorally targeted advertising that uses privacy-hostile tracking technologies to determine which advert to show a website user?

A new piece of research suggests publishers make just 4% more than if they were to serve a non-targeted ad.

It’s a finding that sheds suggestive light on why so many newsroom budgets are shrinking and so many journalists are finding themselves out of work — even as adtech giants continue stuffing their coffers with massive profits.

Visit the average news website lousy with third party cookies (yes, we know, it’s true of TC too) and you’d be forgiven for thinking the publisher is also getting fat profits from the data creamed off their users as they plug into programmatic ad systems that trade info on Internet users’ browsing habits to determine the ad which gets displayed.

Yet while the online ad market is massive and growing — $88BN in revenues in the US in 2017, per IAB data, a 21% year-on-year increase — publishers are not the entities getting filthy rich off of their own content.

On the contrary, research in recent years has suggested that a large proportion of publishers are being squeezed by digital display advertising economics, with some 40% reporting either stagnant or shrinking ad revenue, per a 2015 Econsultancy study. (Hence, we can posit, the rise in publishers branching into subscriptions — TC’s own offering can be found here: Extra Crunch).

The lion’s share of value being created by digital advertising ends up in the coffers of adtech giants, Google and Facebook. Aka the adtech duopoly. In the US, the pair account for around 60% of digital ad market spending, per eMarketer — or circa $76.57BN.

Their annual revenues have mirrored overall growth in digital ad spend — rising from $74.9BN to $136.8BN, between 2015 and 2018, in the case of Google’s parent Alphabet; and $17.9BN to $55.8BN for Facebook. (While US online ad spend stepped up from $59.6BN to $107.5BN+ between 2015 and 2018.)

eMarketer projects 2019 will mark the first decline in the duopoly’s collective share. But not because publishers’ fortunes are suddenly set for a bonanza turnaround. Rather another tech giant — Amazon — has been growing its share of the digital ad market, and is expected to make what eMarketer dubs the start of “a small dent in the duopoly”.

Behavioral advertising — aka targeted ads — has come to dominate the online ad market, fuelled by platform dynamics encouraging a proliferation of tracking technologies and techniques in the unregulated background. And by, it seems, greater effectiveness from the perspective of online advertisers, as the paper notes. (“Despite measurement and attribution challenges… many studies seem to concur that targeted advertising is beneficial and effective for advertising firms.”)

This has had the effect of squeezing out non-targeted display ads, such as those that rely on contextual factors to select the ad — e.g. the content being viewed, device type or location.

The latter are now the exception; a fall-back for when cookies have been blocked, for example. (Albeit one that veteran pro-privacy search engine DuckDuckGo has nonetheless turned into a profitable contextual ad business.)
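To make the distinction concrete, here’s a minimal sketch of contextual selection in code; the inventory and matching rules are invented for illustration, and the point is that the inputs all come from the current page request, not from a tracked profile of the user:

```python
# Hypothetical sketch: contextual selection uses only the current request,
# never a cross-site profile of the user, so no tracking cookies are needed.
from dataclasses import dataclass

@dataclass
class AdRequest:
    page_topic: str  # e.g. inferred from the article being viewed
    device: str      # e.g. "mobile" or "desktop"
    country: str     # coarse location from the request, not browsing history

# Invented inventory, keyed by the context each ad suits
INVENTORY = {
    ("travel", "mobile"): "airline_app_install_ad",
    ("travel", "desktop"): "hotel_booking_ad",
    ("finance", "desktop"): "brokerage_ad",
}

def pick_contextual_ad(req: AdRequest) -> str:
    # Match on page content and device type; fall back to a generic house ad
    return INVENTORY.get((req.page_topic, req.device), "house_ad")

print(pick_contextual_ad(AdRequest("travel", "mobile", "NL")))
```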

One 2017 study by IHS Markit suggested that 86% of programmatic advertising in Europe was using behavioural data, while even a quarter (24%) of non-programmatic advertising was found to be using behavioural data, per its model.

“In 2016, 90% of the digital display advertising market growth came from formats and processes that use behavioural data,” it observed, projecting growth of 106% for behaviourally targeted advertising between 2016 and 2020, and a decline of 63.6% for forms of digital advertising that don’t use such data.

The economic incentives to push behavioral advertising vs non-targeted ads look clear for dominant platforms that rely on amassing scale — across advertisers, other people’s eyeballs, content and behavioral data — to extract value from the Internet’s dispersed and diverse audience.

But the incentives for content producers to subject themselves — and their engaged communities of users — to these privacy-hostile economies of scale look a whole lot more fuzzy.

Concern about potential imbalances in the online ad market is also leading policymakers and regulators on both sides of the Atlantic to question the opacity of the market — and call for greater transparency.

A price on people tracking’s head

The new research, which will be presented at the Workshop on the Economics of Information Security conference in Boston next week, aims to contribute a new piece to this digital ad revenue puzzle by trying to quantify the value to a single publisher of choosing ads that are behaviorally targeted vs those that aren’t.

We’ve flagged the research before — when the findings were cited by one of the academics involved in the study at an FTC hearing — but the full paper has now been published.

It’s called Online Tracking and Publishers’ Revenues: An Empirical Analysis, and is co-authored by three academics: Veronica Marotta, an assistant professor in information and decision sciences at the Carlson School of Management, University of Minnesota; Vibhanshu Abhishek, associate professor of information systems at the Paul Merage School of Business, University of California, Irvine; and Alessandro Acquisti, professor of IT and public policy at Carnegie Mellon University.

“While the impact of targeted advertising on advertisers’ campaign effectiveness has been vastly documented, much less is known about the value generated by online tracking and targeting technologies for publishers – the websites that sell ad spaces,” the researchers write. “In fact, the conventional wisdom that publishers benefit too from behaviorally targeted advertising has rarely been scrutinized in academic studies.”

“As we briefly mention in the paper, notwithstanding claims about the shared benefits of online tracking and behavioral targeting for multiple stakeholders (merchants, publishers, consumers, intermediaries…), there is a surprising paucity of empirical estimates of economic outcomes from independent researchers,” Acquisti also tells us.

“In fact, most of the estimates focus on the advertisers’ side of the market (for instance, there have been quite a few studies estimating the increase in click-through or conversion rates associated with targeted ads); much less is known about the publishers’ side of the market. So, going into the study, we were genuinely curious about what we may find, as there was little in terms of data that could anchor our predictions.

“We did have theoretical bases to make possible predictions, but those predictions could be quite antithetical. Under one story, targeting increases the value of the audience, which increases advertisers’ bids, which increases publishers’ revenues; under a different story, targeting decreases the ‘pool’ of audience interested in an ad, which decreases competition to display ads, which reduces advertisers’ bids, eventually reducing publishers’ revenues.”

For the study the researchers were provided with a data-set comprising “millions” of display ad transactions completed in a week across multiple online outlets owned by a single (unidentified) large publisher which operates websites in a range of verticals such as news, entertainment and fashion.

The data-set also included whether or not the site visitor’s cookie ID was available — enabling analysis of the price difference between behaviorally targeted and non-targeted ads. (The researchers used a statistical mechanism to control for systematic differences between users who block cookies.)
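To make that concrete, here is a minimal sketch of the kind of analysis involved, assuming a simple log-price regression with fixed-effects controls; the column names, file name and model specification are illustrative, not the paper’s actual method:

```python
# Illustrative sketch only -- not the study's actual specification.
# Assume one row per display-ad transaction, with the clearing `price`,
# a 0/1 `has_cookie` indicator, and categorical columns that proxy for
# systematic differences in where and how each ad was served.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

ads = pd.read_csv("ad_transactions.csv")  # hypothetical file name

# Regress log price on cookie availability plus fixed effects for site,
# ad slot and browser, so the cookie coefficient isn't just picking up
# differences between the contexts cookie and non-cookie ads run in.
result = smf.ols(
    "np.log(price) ~ has_cookie + C(website) + C(ad_position) + C(browser)",
    data=ads,
).fit()

premium = np.exp(result.params["has_cookie"]) - 1
print(f"Estimated price premium for cookie-enabled ads: {premium:.1%}")
```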

As noted above, the top-line finding is only a very small gain for the publisher whose data they were analyzing — of around 4%. Or an average increase of $0.00008 per advertisement. 
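Taken at face value, those two figures also pin down the baseline: $0.00008 ÷ 0.04 works out to roughly $0.002 per non-targeted impression, or about $2 per thousand ads served, with behavioral targeting adding less than a hundredth of a cent per ad on top.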

It’s a finding that contrasts wildly with some of the loud yet unsubstantiated opinions which can be found being promulgated online — claiming the ‘vital necessity’ of behavioral ads to support publishers/journalism.

For example, this article, published earlier this month by a freelance journalist writing for The American Prospect, includes the claim that: “An online advertisement without a third-party cookie sells for just 2 percent of the cost of the same ad with the cookie.” Yet it does not specify a source for the statistic it cites.

(The author told us the reference is to a 2018 speech made by Index Exchange’s Andrew Casale, when he suggested ad requests without a buyer ID receive 99% lower bids vs the same ad request with the identifier. She added that her conversations with people in the adtech industry had suggested a spread between a 99% and 97% decline in the value of an ad without a cookie, hence choosing a middle point.)

At the same time policymakers in the US now appear painfully aware how far behind Europe they are lagging where privacy regulation is concerned — and are fast dialling up their scrutiny of and verbal horror over how Internet users are tracked and profiled by adtech giants.

At a Senate Judiciary Committee hearing earlier this month — convened with the aim of “understanding the digital ad ecosystem and the impact of data privacy and competition policy” — the talk was not whether to regulate big tech but how hard to crack down on monopolistic ad giants.

“That’s what brings us here today. The lack of choice [for consumers to preserve their privacy online],” said senator Richard Blumenthal. “The excessive and extraordinary power of Google and Facebook and others who dominate the market is a fact of life. And so privacy protection is absolutely vital in the short run.”

The kind of “invasive surveillance” that the adtech industry systematically deploys is “something we would never tolerate from a government but Facebook and Google have the power of government never envisaged by our founders,” Blumenthal went on, before listing a few of the types of personal data that are sucked up and exploited by the adtech industrial surveillance complex: “Health, dating, location, finance, extremely personal details — offered to anyone with almost no restraint.”

Bearing that “invasive surveillance” in mind, a 4% publisher ‘premium’ for privacy-hostile ads vs adverts that are merely contextually served (and so don’t require pervasive tracking of web users) starts to look like a massive rip off — of both publisher brand and audience value, as well as Internet users’ rights and privacy.

Yes, targeted ads do appear to generate a small revenue increase, per the study. But, as the researchers also point out, that needs to be offset against the cost to publishers of complying with privacy regulations.

“If setting tracking cookies on visitors was cost free, the website would definitely be losing money. However, the widespread use of tracking cookies – and, more broadly, the practice of tracking users online – has been raising privacy concerns that have led to the adoption of stringent regulations, in particular in the European Union,” they write — going on to cite an estimate by the International Association of Privacy Professionals that Fortune’s Global 500 companies will spend around $7.8BN on compliance costs to meet the requirements of Europe’s General Data Protection Regulation (GDPR).

Wider costs of systematically eroding online privacy are harder for publishers to put a value on. But they should also be considered — whether the cost to brand reputation and user loyalty when a publisher lards its sites with unwanted trackers, or wider societal costs linked to the risks of data-fuelled manipulation and exploitation of vulnerable groups. Simply put, it’s not a good look.

Publishers may appear complicit in the asset stripping of their own content and audiences for what — per this study — seems only marginal gain, but the opacity of the adtech industry means most publishers likely don’t realize exactly what kind of ‘deal’ they’re getting at the hands of the ad giants who grip them.

Which makes this research paper a very compelling read for the online publishing industry… and, well, a pretty awkward newsflash for anyone working in adtech.

 

While the study only provides a snapshot of ad market economics, as experienced by a single publisher, the glimpse it presents is distinctly different from the picture the adtech lobby has sought to paint, as it has ploughed money into arguing against privacy legislation — on the claimed grounds that ‘killing behavioural advertising would kill free online content’. 

Saying no more creepy ads might only marginally reduce publishers’ revenue doesn’t have quite the same doom-laden ring, clearly.

“In a nutshell, this study provides an initial data point on a portion of the advertising ecosystem over which claims had been made but little empirical verification was completed. The results highlight the need for more transparency over how the value generated by flows of data gets allocated to different stakeholders,” says Acquisti, summing up how the study should be read against the ad market as a whole.

Contacted for a response to the research, Randall Rothenberg, CEO of advertising business organization, the IAB, agreed that the digital supply chain is “too complex and too opaque” — and also expressed concern about how relatively little value generated by targeted ads is trickling down to publishers.

“One week’s worth of data from one unidentified publisher does not make for a projectible (sic) piece of research. Still, the study shows that targeted advertising creates immense value for brands — more than 90% of the unnamed publisher’s auctioned ads were sold with targeting attached, and advertisers were willing to pay a 60% premium for those ads. Yet very little of that value flowed to the publisher,” he told TechCrunch. “As IAB has been saying for a decade, the digital supply chain is too complex and too opaque, and this diversion of value is more proof that transparency is required so that publishers can benefit from the value they create.”

The research paper includes discussion of the limitations to the approach, as well as ideas for additional research work — such as looking at how the value of cookies changes depending on how much information they contain (on that they write of their initial findings: “Information seem to be very valuable (from the publisher’s perspective) when we compare cookies with very little information to cookies with some information; after a certain point, adding more information to a cookie does not seem to create additional value for the publisher”); and investigating how “the (un)availability of a cookie changes the competition in the auction” — to try to understand ad auction competition dynamics and the potential mechanisms at play.

“This is one new and hopefully useful data point, to which others must be added,” Acquisti also told us in concluding remarks. “The key to research work is incremental progress, with more studies progressively adding a clearer understanding of an issue, and we look forward to more research in this area.”

This report was updated with additional comment


Source: TechCrunch

You might hate it, but Facebook Stories now has 500M users

Posted on Apr 24, 2019

You might think it’s redundant with Instagram Stories, or just don’t want to see high school friends’ boring lives, but ephemeral Snapchat-style Stories now have 500 million daily users across Facebook and Messenger. WhatsApp’s Stories feature Status has 500 million dailies too, and Instagram hit that milestone three months ago. That’s impressive, because it means one-third of Facebook’s 1.56 billion daily users are posting or watching Stories each day, up from zero when Facebook launched the feature two years ago.

For reference, Stories inventor Snapchat has just 190 million total daily users.


CEO Mark Zuckerberg announced the new stats on today’s Facebook Q1 2019 earnings call, which showed its user growth rate had increased but that it had set aside $3 billion for a potential FTC fine over privacy practices.

Facebook isn’t just using Stories to keep people engaged, but to squeeze more cash out of them. Today COO Sheryl Sandberg announced that 3 million advertisers have now bought Stories ads across Facebook’s family of apps. I’d expect Facebook to launch a Stories Ad Network soon so other apps can show Facebook’s vertical video ads and get a cut of the revenue.

Facebook’s aggressive move to clone Snapchat Stories not just in Instagram but everywhere might have pissed users off at first, but many of them have come around. If you give people a place to put their face at the top of their friends’ phones, they’ll fill it. And if someone dangles a window into the lives of people you know and people you wish you did, you’ll open that window regularly.


Source: TechCrunch

Google removed 2.3B bad ads, banned ads on 1.5M apps + 28M pages, plans new Policy Manager this year

Posted on Mar 14, 2019

Google is a tech powerhouse in many categories, including advertising. Today, as part of its efforts to improve how that ad business works, it provided an annual update that details the progress it’s made to shut down some of the more nefarious aspects of it.

Using both manual reviews and machine learning, Google said it removed 2.3 billion “bad ads” in 2018 that violated its policies, which at their most general forbid ads that mislead or exploit vulnerable people. Along with that, Google has been tackling the other side of the “bad ads” conundrum: pinpointing and shutting down sites that violate policies and also profit from using its ad network. Google said it removed ads from 1.5 million apps and nearly 28 million pages that violated publisher policies.

On the more proactive side, the company also said today that it is introducing a new Ad Policy Manager in April to give tips to publishers to avoid listing non-compliant ads in the first place.

Google’s ad machine makes billions for the company — more than $32 billion in the previous quarter, accounting for 83 percent of all Google’s revenues. Those revenues underpin a variety of wildly popular, free services such as Gmail, YouTube, Android and of course its search engine — but there is undoubtedly a dark side, too: bad ads that slip past the algorithms and mislead or exploit vulnerable people, and sites that exploit Google’s ad network by using it to fund the spread of misleading information, or worse.

Notably, Google’s 2.3 billion figure is nearly 1 billion fewer ads than it removed the previous year for policy violations.

While Google has continued to improve its ability to track and stop these ads before they make their way to its network, Google said in a response to TC that the lower number was actually because it has shifted its focus to removing bad accounts rather than individual bad ads — the idea being that one can be responsible for multiple bad ads.

Indeed, the number of bad accounts that got removed in 2018, nearly 1 million, was double the figure in 2017, and that would mean the bad ads are not hitting the network in the first place.

“By removing one bad account, we’re blocking someone who could potentially run thousands of bad ads,” a company spokesperson said. “This helps to address the root cause of bad ads and allows us to better protect our users.”

Meanwhile, though the ad business continues to grow, that growth has been slowing a little amid competition from other players like Facebook and Amazon.

The more cynical question one might ask here is whether Google removed fewer ads to improve its bottom line. But in reality, remaining vigilant about all the bad stuff is more than just Google doing the right thing. It’s been shown that some advertisers will walk away rather than be associated with nefarious or misleading content. Recent YouTube ad pulls by huge brands like AT&T, Nestle and Epic Games — after it was found that pedophiles had been lurking in the comments of YouTube videos — show that there are still more frontiers Google will need to tackle in the future to keep its house — and business — in order.

For now, it’s focusing on ads, apps, website pages, and the publishers who run them all.

On the advertising front, Google’s director of sustainable ads, Scott Spencer, highlighted ads removed from several specific categories this year: there were nearly 207,000 ads for ticket resellers, 531,000 ads for bail bonds and 58.8 million phishing ads taken out of the network.

Part of this was based on the company identifying and going after some of these areas, either on its own steam or because of public pressure. In the case of ads for drug rehab clinics, the company removed all such ads after an exposé, before reintroducing them a year later. Some 31 new policies were added in the last year to cover more categories of suspicious ads, Spencer said. One of these covered cryptocurrencies: it will be interesting to see how and if this one becomes a more prominent part of the mix in the years ahead.

Because ads are like the proverbial trees falling in the forest — you have to be there to hear the sound — Google is also continuing its efforts to identify bad apps and sites that are hosting ads from its network (both the good and bad).

On the website front, it created 330 new “detection classifiers” to seek out specific pages that are violating policies. Google’s focus on page granularity is part of a bigger effort it has made to add more page-specific tools overall to its network — it also introduced page-level “auto-ads” last year — so this is about better housekeeping as it works on ways to expand its advertising business. The efforts to use this to ID “badness” at page level led Google to shut down 734,000 publishers and app developers, removing ads from 1.5 million apps and 28 million pages that violated policies.

Fake news also continues to get a name check in Google’s efforts.

The focus for both Google and Facebook in the last year has been on how their networks are used to manipulate democratic processes. No surprise there: this is an area where they have been heavily scrutinised by governments. The risk is that, if they do not demonstrate that they are not lazily allowing dodgy political ads on their networks — because after all those ads do still represent ad revenues — they might find themselves in regulatory hot water, with more policies being enforced from the outside to curb their operations.

This past year, Google said that it verified 143,000 election ads in the US — it didn’t note how many it banned — and started to provide new data to people about who is really behind these ads. The same will be launched in the EU and India this year ahead of elections in those regions.

The new policies it’s introducing to improve the range of sites it indexes and helps people find are also taking shape. Some 1.2 million pages, 22,000 apps and 15,000 sites were removed from its ad network for violating policies around misrepresentative, hateful or other low-quality content. These included 74,000 pages and 190,000 ads that violated its “dangerous or derogatory” content policy.

Looking ahead, the new dashboard that Google announced it would be launching next month is a self-help tool for advertisers: using machine learning, Google will scan ads before they are uploaded to the network to determine whether they violate any policies. At launch, it will look at ads, keywords and extensions across a publisher’s account (not just the ad itself).

Over time, Google said, it will also give tips to the publishers in real time to help fix them if there are problems, along with a history of appeals and certifications.

This sounds like a great idea for ad publishers who are not in the market for peddling iffy content: more communication and quick responses are what they want so that if they do have issues, they can fix them and get the ads out the door. (And that, of course, will also help Google by ushering in more inventory, faster and with less human involvement.)

More worrying, in my opinion, is how this might get misused by bad actors. As malicious hacking has shown us, creating screens sometimes also creates a way for malicious people to figure out loopholes for bypassing them.


Source: TechCrunch

Former CEO Zain Jaffer files wrongful termination lawsuit against Vungle

Posted on Mar 12, 2019

Vungle founder Zain Jaffer filed a lawsuit today accusing the mobile advertising company of wrongfully terminating him from the role of CEO.

The lawsuit cites a section of the California labor code that it says “expressly and specifically prohibits discrimination and retaliation by employers based upon an arrest or detention that did not result in conviction.”

Jaffer was arrested in October 2017 in an incident involving his young son — the charges included performing a lewd act on a child and assault with a deadly weapon. Last year, the charges were dropped, with the San Mateo District Attorney’s Office saying it did “not believe that there was any sexual conduct by Mr. Jaffer that evening,” while “the injuries were the result of Mr. Jaffer being in a state of unconsciousness caused by prescription medication.”

Afterwards, Jaffer began looking into either selling his Vungle shares or pursuing a leadership change at the company, something he alludes to in his statement on the suit:

Once I was absolved of any wrongdoing, I was looking forward to a friendly relationship with the Company. Instead, Vungle unfairly and unlawfully sought to destroy my career, blocked my efforts to sell my own shares or transfer shares to family members, and tried to prevent me from purchasing shares in the Company.

When reached by TechCrunch, a Vungle spokesperson declined to comment on the lawsuit.

The suit does not specify the amount that Jaffer is seeking, but his attorney Joann Rezzo reportedly told Bloomberg that he has suffered at least $100 million worth of harm. When asked about damages, Jaffer’s spokesperson sent us the following statement from Rezzo:

The amount to be awarded would be entirely within the discretion of the jury. My firm won almost $20M for an employee who asserted similar claims against Allstate Insurance Company. Mr. Jaffer’s potential recovery is much, much higher.

The suit she’s referring to involved a former Allstate employee who was awarded $18.6 million after he was fired following an arrest for domestic violence and possession of marijuana paraphernalia. All the charges were eventually dismissed.

You can read Jaffer’s full lawsuit below.

Jaffer v. Vungle Conformed … (document embedded via Scribd)


Source: TechCrunch

Cookie walls don’t comply with GDPR, says Dutch DPA

Posted on Mar 8, 2019

Cookie walls that demand a website visitor agrees to their Internet browsing being tracked for ad-targeting as the ‘price’ of entry to the site are not compliant with European data protection law, the Dutch data protection agency clarified yesterday.

The DPA said it has received dozens of complaints from Internet users who had had their access to websites blocked after refusing to accept tracking cookies — so it has taken the step of publishing clear guidance on the issue.

It also says it will be stepping up monitoring, adding that it has written to the most complained about organizations (without naming any names) — instructing them to make changes to ensure they come into compliance with GDPR.

Europe’s General Data Protection Regulation, which came into force last May, tightens the rules around consent as a legal basis for processing personal data — requiring it to be specific, informed and freely given in order for it to be valid under the law.

Of course, consent is not the only legal basis for processing personal data, but many websites do rely on asking Internet visitors for consent to ad cookies as they arrive.

And the Dutch DPA’s guidance makes it clear Internet visitors must be asked for permission in advance for any tracking software to be placed — such as third party tracking cookies; tracking pixels; and browser fingerprinting tech — and that that permission must be freely obtained. Ergo, a free choice must be offered.

So, in other words, a ‘data for access’ cookie wall isn’t going to cut it. (Or, as the DPA puts it: “Permission is not ‘free’ if someone has no real or free choice. Or if the person cannot refuse giving permission without adverse consequences.”)

“This is not for nothing; website visitors must be able to trust that their personal data are properly protected,” it further writes in a clarification published on its website [translated via Google Translate].

“There is no objection to software for the proper functioning of the website and the general analysis of the visit on that site. More thorough monitoring and analysis of the behavior of website visitors and the sharing of this information with other parties is only allowed with permission. That permission must be completely free,” it adds. 
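Translated into crude gating logic, the DPA’s position amounts to something like the sketch below (the cookie categories and function names are ours, for illustration, not the regulator’s):

```python
# Hypothetical sketch of the Dutch DPA's guidance as gating logic:
# functional and general-analytics cookies are fine without consent,
# tracking requires a freely given opt-in, and site access is never
# conditional on that opt-in (i.e. no cookie wall).
from enum import Enum

class CookieKind(Enum):
    FUNCTIONAL = "functional"  # needed for the site to work properly
    ANALYTICS = "analytics"    # general, aggregate analysis of visits
    TRACKING = "tracking"      # third-party tracking / ad targeting

def may_set_cookie(kind: CookieKind, user_opted_in: bool) -> bool:
    if kind in (CookieKind.FUNCTIONAL, CookieKind.ANALYTICS):
        return True            # "no objection", per the DPA
    return user_opted_in       # tracking needs prior, free permission

def may_access_site(user_opted_in: bool) -> bool:
    # A cookie wall would return `user_opted_in` here -- which is exactly
    # what the guidance rules out. Access must not depend on consent.
    return True
```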

We’ve reached out to the DPA with questions.

In light of this ruling the cookie wall on the Internet Advertising Bureau (IAB)’s European site (screengrabbed below) looks like a textbook example of what not to do — given the online ad industry association is bundling multiple cookie uses (site functional cookies; site analytical cookies; and third party advertising cookies) under a single ‘I agree’ option.

It does not offer visitors any opt-outs at all. (Not even under the ‘More info’ or privacy policy options pictured below).

If the user does not click ‘I agree’ they cannot gain access to the IAB’s website. So there’s no free choice here. It’s agree or leave.

Clicking ‘More info’ brings up additional information about the purposes the IAB uses cookies for — where it states it is not using collected information to create “visitor profiles”.

However it notes it is using Google products, and explains that some of these use cookies that may collect visitors’ information for advertising — thereby bundling ad tracking into the provision of its website ‘service’.

Again the only ‘choice’ offered to site visitors is ‘I agree’ or to leave without gaining access to the website. Which means it’s not a free choice.

The IAB told us no data protection agencies had been in touch regarding its cookie wall.

Asked whether it intends to amend the cookie wall in light of the Dutch DPA’s guidance a spokeswoman said she wasn’t sure what the team planned to do yet — but she claimed GDPR does not “outright prohibit making access to a service conditional upon consent”; pointing also to the (2002) ePrivacy Directive which she claimed applies here, saying it “also includes recital language to the effect of saying that website content can be made conditional upon the well-informed acceptance of cookies”.

So the IAB’s position appears to be that the ePrivacy Directive trumps GDPR on this issue.

Though it’s not clear how they’ve arrived at that conclusion. (The fifteen+ year old ePrivacy Directive is also in the process of being updated — while the flagship GDPR only came into force last year.)

The portion of the ePrivacy Directive that the IAB appears to be referring to is recital 25 — which includes the following line:

Access to specific website content may still be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose.

However, “specific website content” is hardly the same thing as full site access, which is what the IAB’s cookie wall entirely blocks.

The “legitimate purpose” point in the recital also provides a second caveat vis-a-vis making access conditional on accepting cookies — and the recital text includes an example of “facilita[ting] the provision of information society services” as such a legitimate purpose.

What are “information society services”? An earlier European directive defines this legal term as services that are “provided at a distance, electronically and at the individual request of a recipient” [emphasis ours] — suggesting it refers to Internet content that the user actually intends to access (i.e. the website itself), rather than ads that track them behind the scenes as they surf.

So, in other words, even per the outdated ePrivacy Directive, a site might be able to require consent for functional cookies from a user to access a portion of the site.

But that’s not the same as saying you can gate off an entire website unless the visitor agrees to their browsing being pervasively tracked by advertisers.

That’s not the kind of ‘service’ website visitors are looking for. 

Add to that, returning to present day Europe, the Dutch DPA has put out very clear guidance demolishing cookie walls.

The only sensible legal interpretation here is that the writing is on the wall for cookie walls.


Source: TechCrunch

UK parliament calls for antitrust, data abuse probe of Facebook

Posted on Feb 18, 2019

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked both into Facebook’s own use of personal data to further its business interests, such as by providing access to users’ data to developers and advertisers in order to increase revenue and/or usage of its own platform; and into what Facebook claimed as ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in order to build voter profiles to try to influence elections.

The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’, when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report. Update: Facebook said it rejects all claims it breached data protection and competition laws.

In a statement attributed to UK public policy manager, Karim Palant, the company told us:

We share the Committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.

We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. But we’re not waiting. We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for 7 years. No other channel for political advertising is as transparent and offers the tools that we do.

We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.

While we still have more to do, we are not the same company we were a year ago. We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.

Last fall Facebook was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga. It is appealing the ICO’s penalty, though, claiming there’s no evidence UK users’ data got misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

“Protecting our data helps us secure the past, but protecting inferences and uses of Artificial Intelligence (AI) is what we will need to protect our future,” the committee warns.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” says the committee. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, but one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18. The government said then that it has not ruled out doing so.

We’ve reached out to the DCMS for a response to the latest committee report. Update: A department spokesperson told us:

The Government’s forthcoming White Paper on Online Harms will set out a new framework for ensuring disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.

This week the Culture Secretary will travel to the United States to meet with tech giants including Google, Facebook, Twitter and Apple to discuss many of these issues.

We welcome this report’s contribution towards our work to tackle the increasing threat of disinformation and to make the UK the safest place to be online. We will respond in due course.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by an app developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations are addressed at social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit referendum vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins MP, chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…

Source: Web and publications unit, House of Commons

“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.

“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.

Three senior managers knew

Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.

The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.

The committee dubs this an example of “a profound failure” of internal governance, also branding it evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.

Here’s the committee’s account of that detail:

We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.

The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

This report was updated with comment from Facebook and the UK government


Source: TechCrunch

Facebook will reveal who uploaded your contact info for ad targeting

Posted on Feb 6, 2019

Facebook’s crackdown on non-consensual ad targeting last year will finally produce results. In March, TechCrunch discovered Facebook planned to require advertisers to pledge that they had permission to upload someone’s phone number or email address for ad targeting. That tool debuted in June, though there was no verification process and Facebook just took businesses at their word despite the financial incentive to lie. In November, Facebook launched a way for ad agencies and marketing tech developers to specify who they were buying promotions “on behalf of.” Soon that information will finally be revealed to users.

Facebook’s new Custom Audiences transparency feature shows when your contact info was uploaded and by whom, and if it was shared between brands and partners

Facebook previously only revealed what brand was using your contact info for targeting, not who uploaded it or when

Starting February 28th, Facebook’s “Why am I seeing this?” button in the drop-down menu of feed posts will reveal more than just the brand that paid for an ad, the biographical details it targeted and whether it had uploaded your contact info. Facebook will also start to show when your contact info was uploaded, whether it was uploaded by the brand or by one of its agency/developer partners, and when access was shared between partners. A Facebook spokesperson tells me the goal is to keep giving people a better understanding of how advertisers use their information.
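To make the change concrete, here is a rough sketch of the kind of provenance record such a disclosure implies. The structure and field names are hypothetical, assumed purely for illustration; Facebook hasn’t published a schema for this feature:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ContactInfoProvenance:
    """Hypothetical record behind a 'Why am I seeing this?' disclosure."""
    brand: str                           # advertiser whose ad you saw
    uploader: str                        # who actually uploaded your contact info
    uploaded_at: datetime                # when the upload happened
    shared_with: Optional[str] = None    # partner the audience was shared with
    shared_at: Optional[datetime] = None

def disclosure_text(p: ContactInfoProvenance) -> str:
    """Render the record roughly the way a transparency UI might phrase it."""
    msg = (f"{p.uploader} uploaded your contact info on "
           f"{p.uploaded_at:%b %d, %Y} to target ads for {p.brand}.")
    if p.shared_with and p.shared_at:
        msg += (f" Access was shared with {p.shared_with} on "
                f"{p.shared_at:%b %d, %Y}.")
    return msg

print(disclosure_text(
    ContactInfoProvenance("Acme Shoes", "Acme's ad agency",
                          datetime(2019, 1, 15))))
```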

This new level of transparency could help users pinpoint what caused a brand to get hold of their contact info. That might help them change their behavior to stay more private. The system could also help Facebook zero in on agencies or partners that are constantly uploading contact info and might not have obtained it legitimately. Apparently seeking not to dredge up old privacy problems, Facebook didn’t publish a blog post about the change but simply announced it in a Facebook post to the Facebook Advertiser Hub Page.

The move comes in the wake of Facebook attaching immediately visible “paid for by” labels to more political ads to defend against election interference. With so many users concerned about how Facebook exploits their data, the Custom Audiences transparency feature could provide a small boost of confidence at a time when people have little faith in the social network’s privacy practices.


Source: The Tech Crunch

Read More

Facebook plans new products as Instagram Stories hits 500M users/day

Posted by on Jan 30, 2019 in Advertising Tech, Apps, eCommerce, Facebook, Facebook Earnings, facebook groups, Facebook Q4 2018, Facebook Stories, instagram, Instagram Stories, Privacy, Social, TC | 0 comments

Roughly half of Instagram’s 1 billion users now use Instagram Stories every day. That 500 million daily user count is up from 400 million in June 2018, and 2 million advertisers are now buying Stories ads across Facebook’s properties.

CEO Mark Zuckerberg called Stories the last big game-changing feature from Facebook, but after concentrating on security last year, the company plans to ship more products that make “major improvements” in people’s lives.

During today’s Q4 2018 earnings call, Zuckerberg outlined several areas where Facebook will push new products this year:

  • Encryption and ephemerality will be added to more features for security and privacy
  • Messaging features will make Messenger and WhatsApp “the center of [your] social experiences”
  • WhatsApp payments will expand to more countries
  • Stories will gain new private sharing options
  • Groups will become an organizing function of Facebook on par with friends & family
  • Facebook Watch will become mainstream this year as video is moved there from the News Feed, Zuckerberg expects
  • Augmented and virtual reality will be improved, and Oculus Quest will ship this spring
  • Instagram commerce and shopping will get new features

Zuckerberg was asked about Facebook’s plan to unify the infrastructure to allow encrypted cross-app messaging between Facebook Messenger, Instagram, and WhatsApp, as first reported by NYT’s Mike Isaac. Zuckerberg explained that the plan wasn’t about a business benefit, but supposedly to improve the user experience. Specifically, it would allow Marketplace buyers and sellers in countries where WhatsApp dominates messaging to use that app to chat instead of Messenger. And for Android users who use Messenger as their SMS client, the unification would allow those messages to be sent with encryption too. He sees expanding encryption here as a way to decentralize Facebook and keep users’ data safe by never having it on the company’s servers. However, Zuckerberg says this will take time and could be a “2020 thing”.
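For a sense of what “never having it on the company’s servers” means in practice, here is a minimal sketch of the end-to-end encryption idea, using PyNaCl’s Box construction purely for convenience. This is a toy under stated assumptions, not Facebook’s actual protocol:

```python
# Toy illustration of end-to-end encryption: the relaying server only ever
# sees ciphertext. PyNaCl's Box (Curve25519 + XSalsa20-Poly1305) is used
# for illustration; it is not Facebook's or WhatsApp's actual protocol.
from nacl.public import PrivateKey, Box

# Each client generates its own keypair; private keys never leave the device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts for Bob with her private key and his public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon?")

# The server stores and forwards only `ciphertext`; it cannot decrypt it.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon?"
```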

Facebook says it now has 2.7 billion monthly users across the Facebook family of apps: Facebook, Instagram, Messenger, and WhatsApp. However, Facebook CFO David Wehner says “Over time we expect family metrics to play the primary role in how we talk about our company and we will eventually phase out Facebook-only community metrics.” That shows Facebook is self-conscious about how its user base is shifting away from its classic social network and towards Instagram and its messaging apps. Family-only metrics could mask how teens are slipping away.


Source: The Tech Crunch

Read More

Knotch raises $25M to help marketers collect data about their content

Posted by on Jan 29, 2019 in Advertising Tech, knotch, new enterprise associates, Startups, Venture Capital | 0 comments

Knotch announced yesterday that it has raised $25 million in Series B funding.

The round was led by New Enterprise Associates, with NEA’s Hilarie Koplow-McAdams joining the Knotch board of directors. Rob Norman, the former chief digital officer of ad giant GroupM, is also joining the board.

“Brands have a desire to understand the effectiveness of their digital content across all channels, a gap that hadn’t been filled before Knotch,” Koplow-McAdams said in a statement. “Our conviction around the Knotch platform and team is driven by their impressive traction and comprehensive product offerings. We’re thrilled to partner with Knotch as they continue their growth trajectory, providing transformative marketing intelligence at scale.”

When we first wrote about Knotch back in 2012, it was a consumer product where people could share their opinions using a color scale. It might seem like a stretch to go from that to a marketing and data company, but in fact Knotch still collects data using its color-based feedback system — now, it’s using that system to ask consumers about their response to sponsored content.
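Knotch hasn’t published how it turns those color responses into metrics, so the following is an invented toy version of the general idea: map each color to a sentiment value and average over responses.

```python
# Toy aggregation of color-scale feedback into a sentiment score. The color
# mapping and scale are invented for illustration; Knotch has not published
# its actual methodology.
COLOR_SENTIMENT = {
    "dark_red": -2, "red": -1, "yellow": 0, "light_green": 1, "green": 2,
}

def content_score(responses):
    """Average mapped sentiment, normalized to the range -1..1."""
    if not responses:
        return 0.0
    return sum(COLOR_SENTIMENT[c] for c in responses) / (2 * len(responses))

print(content_score(["green", "yellow", "red", "green"]))  # 0.375
```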

In addition, Knotch offers a competitive intelligence product, as well as Blueprint, which helps marketers find the best publishers for their sponsored content.


“As [brands are building] their own content hubs and recognizing content as a really key piece of their marketing stack, as they’re turning to this space, there’s not a lot of great options for them to turn to and say, ‘Here’s a way to know in advance which creative themes and topics and formats [are going to resonate]. Here’s how we optimize this content, here’s a way to benchmark what you’re doing,’” founder and CEO Anda Gansca told me.

And it sounds like Gansca’s vision goes beyond sponsored content.

“In this convoluted landscape, you need a partner that is going to be your Switzerland of data, who’s aligned with you, collecting transparent digital performance data across paid and owned channels,” she said.

Knotch has now raised a total of $34 million. Customers include JP Morgan Chase, AT&T, Ally Bank, Ford, Calvin Klein and Salesforce.


Source: The Tech Crunch

Read More

Facebook, Google and Twitter told to do more to fight fake news ahead of European elections

Posted by on Jan 29, 2019 in Advertising Tech, Artificial Intelligence, Brussels, disinformation, dublin, Europe, European Commission, european parliament, European Union, Facebook, law enforcement, Mariya Gabriel, media literacy, Nick Clegg, online disinformation, Policy, rt, search engine, Singapore, Social, Social Media, social network, Software, spokesperson, The Guardian, Twitter | 0 comments

A first batch of monthly progress reports from tech giants and advertising companies on what they’re doing to help fight online disinformation has been published by the European Commission.

Platforms including Facebook, Google and Twitter signed up to a voluntary EU code of practice on the issue last year.

The first reports cover measures taken by platforms up to December 31, 2018.

The implementation reports are intended to detail progress towards the goal of putting the squeeze on disinformation — such as by proactively identifying and removing fake accounts — but the European Commission has today called for tech firms to intensify their efforts, warning that more needs to be done in the run up to the 2019 European Parliament elections, which take place in May.

The Commission announced a multi-pronged action plan on disinformation two months ago, urging greater co-ordination on the issue between EU Member States and pushing for efforts to raise awareness and encourage critical thinking among the region’s people.

But it also heaped pressure on tech companies especially, warning that it wanted to see rapid action and progress.

A month on, the Commission sounds less than impressed with tech giants’ ‘progress’ on the issue.

Mozilla also signed up to the voluntary Code of Practice, and all the signatories committed to take broad-brush action to try to combat disinformation.

Although, as we reported at the time, the code suffered from a failure to nail down terms and requirements — suggesting not only that measuring progress would be tricky but that progress itself might prove an elusive and slippery animal.

The first response certainly looks to be a mixed bag. Which is perhaps expected given the overarching difficulty of attacking a complex and multi-faceted problem like disinformation quickly.

Though there’s also little doubt that opaque platforms used to getting their own way with data and content are going to be dragged kicking and screaming towards greater transparency. Hence it suits their purpose to be able to produce multi-page chronicles of ‘steps taken’, which allows them to project an aura of action — while continuing to indulge in their preferred foot-drag.

The Guardian reports especially critical comments made by the Commission vis-a-vis Facebook’s response, for example — with Julian King saying at today’s press conference that the company still hasn’t given independent researchers access to its data.

“We need to do something about that,” he added.

Here’s the Commission’s brief rundown of what’s been done by tech firms but with emphasis firmly placed on what’s yet to be done:

  • Facebook has taken or is taking measures towards the implementation of all of the commitments but now needs to provide greater clarity on how the social network will deploy its consumer empowerment tools and boost cooperation with fact-checkers and the research community across the whole EU.
  • Google has taken steps to implement all its commitments, in particular those designed to improve the scrutiny of ad placements, transparency of political advertisement and providing users with information, tools and support to empower them in their online experience. However some tools are only available in a small number of Member States. The Commission also calls on the online search engine to support research actions on a wider scale.
  • Twitter has prioritised actions against malicious actors, closing fake or suspicious accounts and automated systems/bots. Still, more information is needed on how this will restrict persistent purveyors of disinformation from promoting their tweets.
  • Mozilla is about to launch an upgraded version of its browser to block cross-site tracking by default but the online browser should be more concrete on how this will limit the information revealed about users’ browsing activities, which could potentially be used for disinformation campaigns.

Commenting in a statement, Mariya Gabriel, commissioner for digital economy and society, said: “Today’s reports rightly focus on urgent actions, such as taking down fake accounts. It is a good start. Now I expect the signatories to intensify their monitoring and reporting and increase their cooperation with fact-checkers and research community. We need to ensure our citizens’ access to quality and objective information allowing them to make informed choices.”

Strip out the diplomatic fillip and the message boils down to: Must do better, fast.

All of which explains why Facebook got out ahead of the Commission’s publication of the reports by putting its fresh-in-post European politician turned head of global comms, Nick Clegg, on a podium in Brussels yesterday — in an attempt to control the PR message about what it’s doing (or rather not doing, as the EC sees it) to boot fake activity into touch.

Clegg (re)announced more controls around the placement of political ads, and said Facebook would set up new human-staffed operations centers — in Dublin and Singapore — to monitor how localised political news is distributed on its network.

Although the centers won’t launch until March. So, again, not something Facebook has done.

The staged press event with Clegg making his maiden public speech for his new employer may have backfired a bit because he managed to be incredibly boring. Although making a hot button political issue as tedious as possible is probably a key Facebook strategy.

Anything to drain public outrage to make the real policymakers go away.

(The Commission’s brandished stick remains the threat that, if it doesn’t see enough voluntary progress from platforms via the Code, it could move towards regulating to tackle disinformation.)

Advertising groups are also signed up to the voluntary code. And the World Federation of Advertisers (WFA), European Association of Communication Agencies and Interactive Advertising Bureau Europe have also submitted reports today.

In its report, the WFA writes that the issue of disinformation has been incorporated into its Global Media Charter, which it says identifies “key issues within the digital advertising ecosystem”, as its members see it. It adds that the charter makes the following two obligation statements:

We [advertisers] understand that advertising can fuel and sustain sites which misuse and infringe upon Intellectual Property (IP) laws. Equally advertising revenue may be used to sustain sites responsible for ‘fake news’ content or ‘disinformation’. Advertisers commit to avoiding (and support their partners in the avoidance of) the funding of actors seeking to influence division or seeking to inflict reputational harm on business or society and politics at large through content that appears false and/or misleading.

While the Code of Practice doesn’t contain a great deal of quantifiable substance, some have read its tea-leaves as a sign that signatories are committing to bot detection and identification — by promising to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.

But while Twitter has previously suggested it’s working on a system for badging bots on its platform (i.e. to help distinguish them from human users) nothing of the kind has yet seen the light of day as an actual Twitter feature. (The company is busy experimenting with other kinds of stuff.) So it looks like it also needs to provide more info on that front.
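For illustration only, a “clear marking system” could be as simple as a machine-readable flag on accounts that client apps must render as a visible label. The fields below are invented, not any platform’s real API:

```python
# Invented sketch of bot badging: a platform-level flag that client apps
# render as a visible label, so automated activity can't pass as human.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    handle: str
    is_automated: bool              # self-declared or platform-detected
    operator: Optional[str] = None  # who runs the bot, if disclosed

def render_byline(account: Account) -> str:
    if not account.is_automated:
        return f"@{account.handle}"
    operator = account.operator or "undisclosed operator"
    return f"@{account.handle} [automated, run by {operator}]"

print(render_byline(Account("newsbot", True, "Example News")))
# @newsbot [automated, run by Example News]
```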

We reached out to the tech companies for comment on the Commission’s response to their implementation reports.

Google emailed us the following statement, attributed to Lie Junius, its director of public policy: 

Supporting elections in Europe and around the world is hugely important to us. We’ll continue to work in partnership with the EU through its Code of Practice on Disinformation, including by publishing regular reports about our work to prevent abuse, as well as with governments, law enforcement, others in our industry and the NGO community to strengthen protections around elections, protect users, and help combat disinformation.

A Twitter spokesperson also told us:

Disinformation is a societal problem and therefore requires a societal response. We continue to work closely with the European Commission to play our part in tackling it. We’ve formed a global partnership with UNESCO on media literacy, updated our fake accounts policy, and invested in better tools to proactively detect malicious activity. We’ve also provided users with more granular choices when reporting platform manipulation, including flagging a potentially fake account.

At the time of writing Facebook had not responded to a request for comment.


Source: The Tech Crunch

Read More