The blog of DataDiggers

Targeted ads offer little extra value for online publishers, study suggests

Posted on May 31, 2019 in Adtech, Advertising Tech, Alphabet, behavioral advertising, digital advertising, digital marketing, display advertising, Europe, Facebook, General Data Protection Regulation, IAB, Marketing, Media, Online Advertising, Privacy, programmatic advertising, Randall Rothenberg, Richard Blumenthal, targeted advertising, United States | 0 comments

How much value do online publishers derive from behaviorally targeted advertising that uses privacy-hostile tracking technologies to determine which advert to show a website user?

A new piece of research suggests publishers make just 4% more than they would by serving a non-targeted ad.

It’s a finding that sheds suggestive light on why so many newsroom budgets are shrinking and so many journalists are finding themselves out of work, even as adtech giants continue stuffing their coffers with massive profits.

Visit the average news website lousy with third party cookies (yes, we know, it’s true of TC too) and you’d be forgiven for thinking the publisher is also getting fat profits from the data creamed off their users as they plug into programmatic ad systems that trade info on Internet users’ browsing habits to determine the ad which gets displayed.

Yet while the online ad market is massive and growing — $88BN in revenues in the US in 2017, per IAB data, a 21% year-on-year increase — publishers are not the entities getting filthy rich off of their own content.

On the contrary, research in recent years has suggested that a large proportion of publishers are being squeezed by digital display advertising economics, with some 40% reporting either stagnant or shrinking ad revenue, per a 2015 Econsultancy study. (Hence, we can posit, the rise in publishers branching into subscriptions — TC’s own offering can be found here: Extra Crunch).

The lion’s share of the value created by digital advertising ends up in the coffers of the adtech giants Google and Facebook, aka the adtech duopoly. In the US the pair account for around 60% of digital ad market spending, per eMarketer, or circa $76.57BN.

Their annual revenues have mirrored overall growth in digital ad spend — rising from $74.9BN to $136.8BN, between 2015 and 2018, in the case of Google’s parent Alphabet; and $17.9BN to $55.8BN for Facebook. (While US online ad spend stepped up from $59.6BN to $107.5BN+ between 2015 and 2018.)

eMarketer projects 2019 will mark the first decline in the duopoly’s collective share. But not because publishers’ fortunes are suddenly set for a bonanza turnaround. Rather another tech giant — Amazon — has been growing its share of the digital ad market, and is expected to make what eMarketer dubs the start of “a small dent in the duopoly”.

Behavioral advertising, aka targeted ads, has come to dominate the online ad market, fuelled by platform dynamics encouraging a proliferation of tracking technologies and techniques in the unregulated background. And, it seems, by greater effectiveness from the perspective of online advertisers, as the paper notes. (“Despite measurement and attribution challenges… many studies seem to concur that targeted advertising is beneficial and effective for advertising firms.”)

This has had the effect of squeezing out non-targeted display ads, such as those that rely on contextual factors to select the ad — e.g. the content being viewed, device type or location.

The latter are now the exception: a fall-back, used for instance when cookies have been blocked. (Albeit one that the veteran pro-privacy search engine DuckDuckGo has nonetheless turned into a profitable contextual ad business.)
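That fallback can be sketched as a hypothetical ad-selection routine (all function and field names here are invented for illustration; no real ad server exposes this API) that prefers a behavioral match when a tracking profile is available and otherwise falls back to contextual signals:

```python
# A hypothetical sketch of the fallback described above. All names are
# illustrative, not any real ad server's API.

def select_ad(request, behavioral_index, contextual_index):
    """Prefer a behaviorally targeted ad; fall back to contextual signals."""
    profile = request.get("cookie_profile")   # None when cookies are blocked
    if profile:
        # Behavioral: match against the user's tracked interests.
        return behavioral_index.get(profile["top_interest"], "house_ad")
    # Contextual fallback: match on the page, device or location instead.
    for signal in ("page_topic", "device_type", "geo"):
        ad = contextual_index.get(request.get(signal))
        if ad:
            return ad
    return "house_ad"   # untargeted default
```

The contextual branch needs no cross-site tracking at all, which is the crux of the privacy trade-off the article describes.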

One 2017 study by IHS Markit suggested that 86% of programmatic advertising in Europe was using behavioural data, while even a quarter (24%) of non-programmatic advertising was found to be using behavioural data, per its model.

“In 2016, 90% of the digital display advertising market growth came from formats and processes that use behavioural data,” it observed, projecting growth of 106% for behaviourally targeted advertising between 2016 and 2020, and a decline of 63.6% for forms of digital advertising that don’t use such data.

The economic incentives to push behavioral advertising vs non-targeted ads look clear for dominant platforms that rely on amassing scale — across advertisers, other people’s eyeballs, content and behavioral data — to extract value from the Internet’s dispersed and diverse audience.

But the incentives for content producers to subject themselves — and their engaged communities of users — to these privacy-hostile economies of scale look a whole lot more fuzzy.

Concern about potential imbalances in the online ad market is also leading policymakers and regulators on both sides of the Atlantic to question the opacity of the market — and call for greater transparency.

A price on people tracking’s head

The new research, which will be presented at the Workshop on the Economics of Information Security conference in Boston next week, aims to contribute a new piece to this digital ad revenue puzzle by trying to quantify the value to a single publisher of choosing ads that are behaviorally targeted vs those that aren’t.

We’ve flagged the research before — when the findings were cited by one of the academics involved in the study at an FTC hearing — but the full paper has now been published.

It’s called Online Tracking and Publishers’ Revenues: An Empirical Analysis, and is co-authored by three academics: Veronica Marotta, an assistant professor in information and decision sciences at the Carlson School of Management, University of Minnesota; Vibhanshu Abhishek, associate professor of information systems at the Paul Merage School of Business, University of California, Irvine; and Alessandro Acquisti, professor of IT and public policy at Carnegie Mellon University.

“While the impact of targeted advertising on advertisers’ campaign effectiveness has been vastly documented, much less is known about the value generated by online tracking and targeting technologies for publishers – the websites that sell ad spaces,” the researchers write. “In fact, the conventional wisdom that publishers benefit too from behaviorally targeted advertising has rarely been scrutinized in academic studies.”

“As we briefly mention in the paper, notwithstanding claims about the shared benefits of online tracking and behaviorally targeting for multiple stakeholders (merchants, publishers, consumers, intermediaries…), there is a surprising paucity of empirical estimates of economic outcomes from independent researchers,”  Acquisti also tells us.

“In fact, most of the estimates focus on the advertisers’ side of the market (for instance, there have been quite a few studies estimating the increase in click-through or conversion rates associated with targeted ads); much less is known about the publishers’ side of the market. So, going into the study, we were genuinely curious about what we may find, as there was little in terms of data that could anchor our predictions.

“We did have theoretical bases to make possible predictions, but those predictions could be quite antithetical. Under one story, targeting increases the value of the audience, which increases advertisers’ bids, which increases publishers’ revenues; under a different story, targeting decreases the ‘pool’ of audience interested in an ad, which decreases competition to display ads, which reduces advertisers’ bids, eventually reducing publishers’ revenues.”
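A toy second-price auction makes the two competing stories concrete. The sketch below is an illustration of the economic argument in the quote, not the paper's model: targeting multiplies each matched bidder's valuation but can thin the field of bidders, and since the publisher earns the second-highest bid, the net effect depends on which force dominates.

```python
import random

def second_price_revenue(bids):
    """Publisher revenue in a second-price auction: the winner pays the runner-up bid."""
    return sorted(bids)[-2] if len(bids) >= 2 else (bids[0] if bids else 0.0)

def avg_revenue(n_bidders, lift, trials=2000, seed=0):
    """Mean revenue when each bidder's value is lift * Uniform(0.5, 1.5).

    Targeting raises lift (the impression is worth more to matched
    advertisers) but can shrink n_bidders (fewer advertisers match).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = [lift * rng.uniform(0.5, 1.5) for _ in range(n_bidders)]
        total += second_price_revenue(bids)
    return total / trials

# The two opposing forces from the quote above:
thick_field = avg_revenue(n_bidders=10, lift=1.0)   # broad audience, no lift
thin_field = avg_revenue(n_bidders=3, lift=1.0)     # targeting shrinks the pool
lifted_thin = avg_revenue(n_bidders=3, lift=2.0)    # ...but raises matched bids
```

With these toy numbers the lift outweighs the thinner field, but shrinking the pool further or lowering the lift flips the outcome, which is exactly why the researchers say the predictions "could be quite antithetical."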

For the study the researchers were provided with a data-set comprising “millions” of display ad transactions completed in a week across multiple online outlets owned by a single (unidentified) large publisher which operates websites in a range of verticals such as news, entertainment and fashion.

The data-set also included whether or not the site visitor’s cookie ID was available, enabling analysis of the price difference between behaviorally targeted and non-targeted ads. (The researchers used a statistical mechanism to control for systematic differences between users who block cookies and those who don’t.)

As noted above, the top-line finding is only a very small gain for the publisher whose data they were analyzing: around 4%, or an average increase of $0.00008 per advertisement.
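Those two figures imply a rough baseline price. This is an inference from the numbers quoted above, not a calculation reproduced from the paper: if $0.00008 is about 4% of the untargeted price, the average untargeted impression is worth roughly $0.002, or about a $2 CPM.

```python
# Back-of-envelope check on the study's headline numbers (an inference
# from the figures quoted above, not a calculation from the paper itself).
premium_per_ad = 0.00008   # extra revenue per behaviorally targeted ad, USD
relative_gain = 0.04       # the reported ~4% uplift

baseline_price = premium_per_ad / relative_gain   # ~= $0.002 per untargeted impression
cpm = baseline_price * 1000                       # ~= $2 per thousand impressions
```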

It’s a finding that contrasts wildly with some of the loud yet unsubstantiated opinions promulgated online claiming the ‘vital necessity’ of behavioral ads to support publishers and journalism.

For example, this article, published earlier this month by a freelance journalist writing for The American Prospect, includes the claim that “An online advertisement without a third-party cookie sells for just 2 percent of the cost of the same ad with the cookie.” Yet it does not specify a source for the statistic it cites.

(The author told us the reference is to a 2018 speech made by Index Exchange’s Andrew Casale, when he suggested ad requests without a buyer ID receive 99% lower bids vs the same ad request with the identifier. She added that her conversations with people in the adtech industry had suggested a spread between a 99% and 97% decline in the value of an ad without a cookie, hence choosing a middle point.)
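That choice of midpoint is just arithmetic: splitting the difference between a 97% and a 99% decline gives a 98% decline, leaving 2% of the original price.

```python
# Reconstructing the journalist's "2 percent" figure: the midpoint of a
# 97% and a 99% decline in an ad's value leaves 2% of the original price.
decline_low, decline_high = 0.97, 0.99
midpoint_decline = (decline_low + decline_high) / 2   # 0.98
remaining_share = 1 - midpoint_decline                # 0.02, i.e. "2 percent"
```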

At the same time policymakers in the US now appear painfully aware how far behind Europe they are lagging where privacy regulation is concerned — and are fast dialling up their scrutiny of and verbal horror over how Internet users are tracked and profiled by adtech giants.

At a Senate Judiciary Committee hearing earlier this month — convened with the aim of “understanding the digital ad ecosystem and the impact of data privacy and competition policy” — the talk was not about whether to regulate big tech but about how hard to crack down on monopolistic ad giants.

“That’s what brings us here today. The lack of choice [for consumers to preserve their privacy online],” said senator Richard Blumenthal. “The excessive and extraordinary power of Google and Facebook and others who dominate the market is a fact of life. And so privacy protection is absolutely vital in the short run.”

The kind of “invasive surveillance” that the adtech industry systematically deploys is “something we would never tolerate from a government but Facebook and Google have the power of government never envisaged by our founders,” Blumenthal went on, before listing a few of the types of personal data that are sucked up and exploited by the adtech industrial surveillance complex: “Health, dating, location, finance, extremely personal details — offered to anyone with almost no restraint.”

Bearing that “invasive surveillance” in mind, a 4% publisher ‘premium’ for privacy-hostile ads vs adverts that are merely contextually served (and so don’t require pervasive tracking of web users) starts to look like a massive rip off — of both publisher brand and audience value, as well as Internet users’ rights and privacy.

Yes, targeted ads do appear to generate a small revenue increase, per the study. But, as the researchers also point out, that needs to be offset against the cost to publishers of complying with privacy regulations.

“If setting tracking cookies on visitors was cost free, the website would definitely be losing money. However, the widespread use of tracking cookies – and, more broadly, the practice of tracking users online – has been raising privacy concerns that have led to the adoption of stringent regulations, in particular in the European Union,” they write — going on to cite an estimate by the International Association of Privacy Professionals that Fortune’s Global 500 companies will spend around $7.8BN on compliance costs to meet the requirements of Europe’s General Data Protection Regulation (GDPR).

The wider costs of systematically eroding online privacy are harder for publishers to put a value on, but they should also be considered, whether the cost to brand reputation and user loyalty when a publisher lards its sites with unwanted trackers, or broader societal costs linked to the risks of data-fuelled manipulation and exploitation of vulnerable groups. Simply put, it’s not a good look.

Publishers may appear complicit in the asset stripping of their own content and audiences for what, per this study, seems only marginal gain. But the opacity of the adtech industry means most publishers likely don’t realize exactly what kind of ‘deal’ they’re getting at the hands of the ad giants who grip them.

Which makes this research paper a very compelling read for the online publishing industry… and, well, a pretty awkward newsflash for anyone working in adtech.

 

While the study only provides a snapshot of ad market economics, as experienced by a single publisher, the glimpse it presents is distinctly different from the picture the adtech lobby has sought to paint, as it has ploughed money into arguing against privacy legislation — on the claimed grounds that ‘killing behavioural advertising would kill free online content’. 

Saying no more creepy ads might only marginally reduce publishers’ revenue doesn’t have quite the same doom-laden ring, clearly.

“In a nutshell, this study provides an initial data point on a portion of the advertising ecosystem over which claims had been made but little empirical verification was completed. The results highlight the need for more transparency over how the value generated by flows of data gets allocated to different stakeholders,” says Acquisti, summing up how the study should be read against the ad market as a whole.

Contacted for a response to the research, Randall Rothenberg, CEO of advertising business organization, the IAB, agreed that the digital supply chain is “too complex and too opaque” — and also expressed concern about how relatively little value generated by targeted ads is trickling down to publishers.

“One week’s worth of data from one unidentified publisher does not make for a projectible (sic) piece of research. Still, the study shows that targeted advertising creates immense value for brands — more than 90% of the unnamed publisher’s auctioned ads were sold with targeting attached, and advertisers were willing to pay a 60% premium for those ads. Yet very little of that value flowed to the publisher,” he told TechCrunch. “As IAB has been saying for a decade, the digital supply chain is too complex and too opaque, and this diversion of value is more proof that transparency is required so that publishers can benefit from the value they create.”

The research paper includes discussion of the limitations to the approach, as well as ideas for additional research work — such as looking at how the value of cookies changes depending on how much information they contain (on that they write of their initial findings: “Information seem to be very valuable (from the publisher’s perspective) when we compare cookies with very little information to cookies with some information; after a certain point, adding more information to a cookie does not seem to create additional value for the publisher”); and investigating how “the (un)availability of a cookie changes the competition in the auction” — to try to understand ad auction competition dynamics and the potential mechanisms at play.

“This is one new and hopefully useful data point, to which others must be added,” Acquisti also told us in concluding remarks. “The key to research work is incremental progress, with more studies progressively adding a clearer understanding of an issue, and we look forward to more research in this area.”

This report was updated with additional comment.


Source: The Tech Crunch


Zuckerberg says breaking up Facebook “isn’t going to help”

Posted on May 11, 2019 in Apps, Chris Hughes, Drama, Facebook, Government, Mark Zuckerberg, Nick Clegg, Policy, Privacy, Social, TC | 0 comments

With the look of someone betrayed, Facebook’s CEO has fired back at co-founder Chris Hughes and his brutal NYT op-ed calling for regulators to split up Facebook, Instagram, and WhatsApp. “When I read what he wrote, my main reaction was that what he’s proposing that we do isn’t going to do anything to help solve those issues. So I think that if what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference” Zuckerberg told France Info while in Paris to meet with French President Emmanuel Macron.

Zuckerberg’s argument boils down to the idea that Facebook’s specific problems with privacy, safety, misinformation, and speech won’t be directly addressed by breaking up the company, and that would instead actually hinder its efforts to safeguard its social networks. The Facebook family of apps would theoretically have fewer economies of scale when investing in safety technology like artificial intelligence to spot bots spreading voter suppression content.

Facebook’s co-founders (from left): Dustin Moskovitz, Chris Hughes, and Mark Zuckerberg

Hughes claims that “Mark’s power is unprecedented and un-American” and that Facebook’s rampant acquisitions and copying have made it so dominant that it deters competition. The call echoes other early execs like Facebook’s first president Sean Parker and growth chief Chamath Palihapitiya who’ve raised alarms about how the social network they built impacts society.

But Zuckerberg argues that Facebook’s size benefits the public. “Our budget for safety this year is bigger than the whole revenue of our company was when we went public earlier this decade. A lot of that is because we’ve been able to build a successful business that can now support that. You know, we invest more in safety than anyone in social media” Zuckerberg told journalist Laurent Delahousse.

The Facebook CEO’s comments were largely missed by the media, in part because the TV interview was heavily dubbed into French with no transcript. But written out here for the first time, his quotes offer a window into how deeply Zuckerberg dismisses Hughes’ claims. “Well [Hughes] was talking about a very specific idea of breaking up the company to solve some of the social issues that we face” Zuckerberg says before trying to decouple solutions from anti-trust regulation. “The way that I look at this is, there are real issues. There are real issues around harmful content and finding the right balance between expression and safety, for preventing election interference, on privacy.”

Claiming that a breakup “isn’t going to do anything to help” is a more unequivocal refutation of Hughes’ claim than that of Facebook VP of communications and former UK deputy Prime Minister Nick Clegg. He wrote in his own NYT op-ed today that “what matters is not size but rather the rights and interests of consumers, and our accountability to the governments and legislators who oversee commerce and communications . . . Big in itself isn’t bad. Success should not be penalized.”

Mark Zuckerberg and Chris Hughes

Something certainly must be done to protect consumers. Perhaps that’s a breakup of Facebook. At the least, banning it from acquiring more social networks of sufficient scale, so it couldn’t snatch another Instagram from its crib, would be an expedient and attainable remedy.

But the sharpest point of Hughes’ op-ed was how he identified that users are trapped on Facebook. “Competition alone wouldn’t necessarily spur privacy protection — regulation is required to ensure accountability — but Facebook’s lock on the market guarantees that users can’t protest by moving to alternative platforms” he writes. After Cambridge Analytica “people did not leave the company’s platforms en masse. After all, where would they go?”

That’s why given critics’ call for competition and Zuckerberg’s own support for interoperability, a core tenet of regulation must be making it easier for users to switch from Facebook to another social network. As I’ll explore in an upcoming piece, until users can easily bring their friend connections or ‘social graph’ somewhere else, there’s little to compel Facebook to treat them better.




Chelsea Manning released from jail as grand jury expires

Posted on May 10, 2019 in Chelsea Manning, Privacy, TC | 0 comments

Chelsea Manning walked free today for the first time after spending two months in Virginia’s Alexandria Detention Center for refusing to cooperate with a grand jury probing her relationship with WikiLeaks. Gizmodo first reported news that Manning left the facility today.

Manning was found to be in contempt of court and remained in custody until the Eastern District of Virginia grand jury expired. Before her release, she was issued another subpoena to appear before a second grand jury on Thursday, May 16.

“Chelsea will continue to refuse to answer questions, and will use every available legal defense to prove to District Judge Trenga that she has just cause for her refusal to give testimony,” her legal team shared in a blog post.

Manning has consistently signaled her ongoing unwillingness to cooperate with the federal grand jury. That makes it entirely possible that she could be returned to custody at the detention center next week when she appears for her latest subpoena.

“I don’t have anything to contribute to this, or any other grand jury,” Manning said last month. “While I miss home, they can continue to hold me in jail, with all the harmful consequences that brings. I will not give up.”




Chipotle customers are saying their accounts have been hacked

Posted on Apr 17, 2019 in Apps, computer security, credential stuffing, data breach, data security, Food, Hack, multi-factor authentication, Password, Prevention, Privacy, Security, spokesperson | 0 comments

A stream of Chipotle customers say their accounts have been hacked, with fraudulent orders charged to their credit cards, sometimes totaling hundreds of dollars.

Customers have posted on several Reddit threads complaining of account breaches and many more have tweeted at @ChipotleTweets to alert the fast food giant of the problem. In most cases, orders were put through under a victim’s account and delivered to addresses often not even in the victim’s state.

Many of the customers TechCrunch spoke to in the past two days said they used their Chipotle account password on other sites. Chipotle spokesperson Laurie Schalow told TechCrunch that credential stuffing was to blame. Hackers take lists of usernames and passwords from other breached sites and brute-force their way into other accounts.
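The mechanics are simple enough to sketch in a few lines, using purely illustrative data: leaked username/password pairs from one breach are replayed against another site, and any account whose owner reused the password falls.

```python
# Credential stuffing in miniature, with purely illustrative data:
# credentials leaked from one site are replayed wholesale against another.

leaked_pairs = [
    ("alice@example.com", "hunter2"),    # password reused on the target site
    ("bob@example.com", "pa55word"),     # not reused
]

target_site_accounts = {
    "alice@example.com": "hunter2",
    "bob@example.com": "unique-passphrase",
}

def stuffing_hits(pairs, accounts):
    """Return the users whose leaked credentials also unlock this site."""
    return [user for user, pw in pairs if accounts.get(user) == pw]
```

Only the reused password succeeds, which is why reports of breached accounts from customers who say their Chipotle password was unique sit so awkwardly with the credential-stuffing explanation.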

But several customers we spoke to said their password was unique to Chipotle. Another customer said they didn’t have an account but ordered through Chipotle’s guest checkout option.

Tweets from Chipotle customers. (Screenshot: TechCrunch)

When we asked Chipotle about this, Schalow said the company is “monitoring any possible account security issues of which we’re made aware and continue to have no indication of a breach of private data of our customers,” and reiterated that the company’s data points to credential stuffing.

It’s a similar set of complaints made by DoorDash customers last year, who said their accounts had been improperly accessed. DoorDash also blamed the account hacks on credential stuffing, but could not explain how some accounts were breached even when users told TechCrunch that they used a unique password on the site.

If credential stuffing is to blame for the Chipotle account breaches, rolling out two-factor authentication would help defeat the automated login process and put an additional barrier between a hacker and a victim’s account.
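The most common form of two-factor authentication is a time-based one-time password (TOTP, RFC 6238), which can be derived with nothing but the standard library. This is a generic sketch of the mechanism, not a claim about how any particular company would deploy it:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time code (RFC 6238) from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A login then requires the password *and* a matching totp(secret) code,
# so a password harvested from another site's breach is no longer enough.
```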

But when asked if Chipotle has plans to roll out two-factor authentication to protect its customers going forward, spokesperson Schalow declined to comment. “We don’t discuss our security strategies.”

Chipotle reported a data breach in 2017 affecting its 2,250 restaurants. Hackers infected its point-of-sale devices with malware, scraping millions of payment cards from unsuspecting restaurant goers. More than a hundred fast food and restaurant chains were also affected by the same malware infections.

In August, three suspects said to be members of the FIN7 hacking and fraud group were charged with the credit card thefts.




Demanding privacy, and establishing trust, in digital health

Posted on Mar 26, 2019 in Column, digital health, Health, Privacy | 0 comments

February’s Wall Street Journal report pulled back the curtain on just how much is at stake when individuals share their personal health information with health and fitness applications. Several of these apps were (perhaps unwittingly) sharing users’ personal health information via a Facebook SDK that was automatically feeding that data to the platform. In one fell swoop, multiple companies damaged trust with their users — perhaps irrevocably.

But the dangers in digital health aren’t limited to rogue SDKs; three days after the Facebook news broke, yet another large health system announced the personal information of more than 325,000 patients had been exposed. All this comes as big tech companies like Apple, IBM and Amazon begin to enter the same space, with plans for huge impact. But even these well-established names enter healthcare with a trust deficit; Rock Health’s 2018 National Consumer Health Survey found that just 11 percent of respondents said they’d be willing to share health data with tech companies.

As we move toward an increasingly digitized world of healthcare — and as early-stage companies and tech behemoths operate alongside one another in the space — how can all involved uphold their responsibilities, follow relevant laws and regulations and maintain the trust of patients and users when it comes to privacy? Companies operating under the highest standards in healthcare are expressly prohibited from monetizing users’ data; how will large tech brand names adapt their business models to act properly?

In order for the promise of digital health to be realized, companies will need to ensure their patients’ data is safe, secure and error-free. Beyond security, healthcare companies operating as providers must also maintain the confidentiality and privacy of that data. Doing so isn’t simply good practice; it’s an existential requirement for companies operating in this space. There is a baseline expectation — from users, and from employers and health plans working with digital health companies — of privacy being maintained.

The success of digital health companies will hinge on whether patients feel comfortable sharing the most intimate data they possess — their personal health information (PHI) — especially when they worry that data could impact their employment. Below are three things digital health companies would do well to keep in mind as they operate in the space.

Comply with — and inform — regulations

In 2018 alone, more than 6.1 million individuals were impacted by healthcare data breaches. Many have started to warn of the “data breach tsunami.” Complacency is no longer viable. The increasing frequency of data breaches should become a rallying cry. When it comes to PHI, protecting the privacy and security of patients and users must be a business imperative.

Patients want to focus on getting better, not having to constantly check their privacy settings.

Complying with regulations and requirements for protecting PHI requires a combination of robust privacy and security strategies. The Health Insurance Portability and Accountability Act (HIPAA) sets the baseline for patient data protection. For companies operating under HIPAA, responsibilities, obligations and opportunities become crystal clear. Federal laws and regulations prescribe privacy and security minimums, as well as the exact rules governing collection, storage and transfer of participant data. For health innovators, strong privacy practices and security controls are key to customer trust and to growth.

This also means that digital health companies must be active participants in shaping the regulations that govern their operations. This isn’t a call to hire as many lobbyists as possible to water down your responsibilities; it’s a demand to educate the state and federal policymakers who will be writing the rules of the road that govern your work for the next phase of healthcare. Informed policy that enables creative iteration while putting the needs of the patient at its center is imperative for the continued success of the entire industry. This is a space where regulations can be helpful in clearly identifying what not to do to be taken seriously — and operate properly — as a digital health company.

HIPAA or not: know your role

HIPAA applies to digital health companies, whether they contract as a vendor (a “business associate”) or a healthcare provider (a “covered entity”). Third parties, especially those that handle PHI, can expose health companies to data breaches and non-compliance. Any data breach suffered by a healthcare company will have serious consequences, including reputational damage, government investigations and monetary damages.

Once credibility has been tarnished, it takes significant time to rebuild trust among consumers. Fundamental to this is understanding the difference between operating in technology broadly and operating in digital health, and ensuring that your organization has a deep understanding of the ins and outs of HIPAA and healthcare data; patients want to focus on getting better, not constantly checking their privacy settings.

Keep compliance at your core

The healthcare industry is already fraught with risk. New laws and market forces only add to the complexities. In order to reach full maturity, digital health companies need to invest, early, in information security experts who understand the intersection of medical devices, software and regulations. Senior leadership teams must empower these experts while staying engaged on best practices and the latest threats. This goes against the rapid growth mindset of venture-backed companies in other industries, but is critical when it comes to healthcare.

If you are handling patient data, hiring a legal and compliance team is a top priority. By implementing a privacy and compliance program, you’ll be better equipped to find and correct potential vulnerabilities, while reducing the chance of fraud, and promoting safe and quality care.

The responsibility to establish trust in digital health is on the most prominent actors in a rapidly growing space. Data and its proper application hold the keys to the evolution of healthcare. But we must never forget that patients and users are opting to share the most intimate data they have. We must meet that responsibility with the systems, personnel and maturity it deserves.




FTC tells ISPs to disclose exactly what information they collect on users and what it’s for

Posted on Mar 26, 2019 in broadband providers, Federal Trade Commission, FTC, Government, isps, Mobile, Policy, Privacy | 0 comments

The Federal Trade Commission, in what could be considered a prelude to new regulatory action, has issued an order to several major internet service providers requiring them to share every detail of their data collection practices. The information could expose patterns of abuse or otherwise troubling data use against which the FTC — or states — may want to take action.

The letters requesting info (detailed below) went to Comcast, Google, T-Mobile and both the fixed and wireless sub-companies of Verizon and AT&T. These “represent a range of large and small ISPs, as well as fixed and mobile Internet providers,” an FTC spokesperson said. I’m not sure which is meant to be the small one, but welcome any information the agency can extract from any of them.

Since the Federal Communications Commission abdicated its role in enforcing consumer privacy at these ISPs when it and Congress allowed the Broadband Privacy Rule to be overturned, others have taken up the torch, notably California and even individual cities like Seattle. But for enterprises spanning the nation, national-level oversight is preferable to a patchwork approach, and so it may be that the FTC is preparing to take a stronger stance.

To be clear, the FTC already has consumer protection rules in place and could already go after an internet provider if it were found to be abusing the privacy of its users — you know, selling their location to anyone who asks or the like. (Still no action there, by the way.)

But the evolving media and telecom landscape, in which we see enormous companies devouring one another to best provide as many complementary services as possible, requires constant reevaluation. As the agency writes in a press release:

The FTC is initiating this study to better understand Internet service providers’ privacy practices in light of the evolution of telecommunications companies into vertically integrated platforms that also provide advertising-supported content.

Although the FTC is always extremely careful with its words, this statement gives a good idea of what they’re concerned about. If Verizon (our parent company’s parent company) wants to offer not just the connection you get on your phone, but the media you request, the ads you are served and the tracking you never heard of, it needs to show that these businesses are not somehow shirking rules behind the scenes.

For instance, if Verizon Wireless says it doesn’t collect or share information about what sites you visit, but the mysterious VZ Snooping Co (fictitious, I should add) scoops all that up and then sells it for peanuts to its sister company, that could amount to a deceptive practice. Of course it’s rarely that simple (though don’t rule it out), but the only way to be sure is to comprehensively question everyone involved and carefully compare the answers with real-world practices.

How else would we catch shady zero-rating practices, zombie cookies, backdoor deals or lip service to existing privacy laws? It takes a lot of poring over data and complaints by the detail-oriented folks at these regulatory bodies to find things out.

To that end, the letters to ISPs ask for a whole boatload of information on companies’ data practices. Here’s a summary:

  • categories of personal information collected about consumers or devices, including purposes, methods and sources of collection
  • how the data has been or is being used
  • third parties that provide or are provided this data and what limitations are imposed thereupon
  • how such data is combined with other types of information and how long it is retained
  • internal policies and practices limiting access to this information by employees or service providers
  • any privacy assessments done to evaluate associated risks and policies
  • how data is aggregated, anonymized or deidentified (and how those terms are defined)
  • how aggregated data is used, shared, etc.
  • “any data maps, inventories, or other charts, schematics, or graphic depictions” of information collection and storage
  • total number of consumers who have “visited or otherwise viewed or interacted with” the privacy policy
  • whether consumers are given any choice in collection and retention of data, and what the default choices are
  • total number and percentage of users that have exercised such a choice, and what choices they made
  • whether consumers are incentivized to (or threatened into) opt into data collection and how those programs work
  • any process for allowing consumers to “access, correct, or delete” their personal information
  • data deletion and retention policies for such information

Substantial, right?

Needless to say, some of this information may not be particularly flattering to ISPs. If only 1 percent of consumers have ever chosen to share their information, for instance, that reflects badly on sharing it by default. And if data is capable of being combined across categories or services to de-anonymize users, even potentially, that’s another major concern.
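The de-anonymization concern can be made concrete: two datasets that each omit names can still be linked through shared quasi-identifiers such as zip code, birth year and device model. A minimal Python sketch of that linkage, with every field name and record invented purely for illustration:

```python
# Hypothetical illustration of re-identification by linking datasets.
# All datasets and records below are invented for the example.

browsing_log = [  # "anonymized" ISP dataset: no names, but quasi-identifiers remain
    {"zip": "98101", "birth_year": 1985, "device": "pixel3", "site": "clinic.example"},
    {"zip": "10001", "birth_year": 1990, "device": "iphonex", "site": "news.example"},
]

billing_records = [  # sister-company dataset: names plus the same quasi-identifiers
    {"name": "A. Smith", "zip": "98101", "birth_year": 1985, "device": "pixel3"},
    {"name": "B. Jones", "zip": "10001", "birth_year": 1990, "device": "iphonex"},
]

def link(records_a, records_b, keys):
    """Join two datasets on shared quasi-identifier fields."""
    index = {tuple(r[k] for k in keys): r for r in records_b}
    matches = []
    for r in records_a:
        hit = index.get(tuple(r[k] for k in keys))
        if hit:
            matches.append({**r, "name": hit["name"]})
    return matches

reidentified = link(browsing_log, billing_records, ["zip", "birth_year", "device"])
print(reidentified[0]["name"], "visited", reidentified[0]["site"])
```

When the quasi-identifier combination is unique per person, as it often is in practice, the "anonymous" browsing record collapses back to a named individual, which is exactly the risk the FTC's questions about aggregation and de-identification probe.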

The FTC representative declined to comment on whether there would be any collaboration with the FCC on this endeavor, whether it was preliminary to any other action and whether it can or will independently verify the information provided by the ISPs contacted. That’s an important point, considering how poorly these same companies represented their coverage data to the FCC for its yearly broadband deployment report. A reality check would be welcome.

You can read the rest of the letter here (PDF).



Mozilla’s free password manager, Firefox Lockbox, launches on Android

Posted by on Mar 26, 2019 in android, android apps, Apps, firefox, Mozilla, password manager, Privacy, Security, web browser, Web browsers | 0 comments

Mozilla’s free password manager designed for users of the Firefox web browser is today officially arriving on Android. The standalone app, called Firefox Lockbox, offers a simple, if somewhat basic, way for users to access the logins already stored in their Firefox browser from their mobile device.

The app is nowhere near as developed as password managers like 1Password, Dashlane or LastPass: it lacks common features like the ability to add, edit or delete passwords; to suggest complex passwords; or to alert you to passwords potentially compromised in data breaches, among other things.

However, the app is free — and if you’re already using Firefox’s browser, it’s at the very least a more secure alternative to writing down your passwords in an unprotected notepad app, for example. And you can opt to enable Lockbox as an Autofill service on Android.

But the app is really just a companion to Firefox. The passwords in Lockbox securely sync to the app from the Firefox browser; they aren’t entered by hand. For security, the app can be locked with facial recognition or a fingerprint (depending on device support). The passwords are also encrypted in a way that doesn’t allow Mozilla to read your data, the company explains in an FAQ.

Firefox Lockbox is one of several projects Mozilla developed through its now-shuttered Test Pilot program. Over a few years, the program allowed the organization to trial more experimental features, some of which made their way to official products, like the recently launched file-sharing app, Firefox Send.

Others in the program, including Firefox Color, Side View, Firefox Notes, Price Tracker and Email Tabs, remain available, but are no longer actively developed beyond occasional maintenance releases. Mozilla’s current focus is on its suite of “privacy-first” solutions, not its other handy utilities.

According to Mozilla, Lockbox was downloaded more than 50,000 times on iOS ahead of today’s Android launch.

The Android version is a free download on Google Play.



Apple ad focuses on iPhone’s most marketable feature — privacy

Posted by on Mar 14, 2019 in Apple, computing, digital media, digital rights, Facebook, Hardware, human rights, identity management, iPhone, law, Mobile, Privacy, TC, terms of service, Tim Cook, United States | 0 comments

Apple is airing a new ad spot in primetime today. Focused on privacy, the spot is visually cued, with no dialog and a simple tagline: Privacy. That’s iPhone.

In a series of humorous vignettes, the message is driven home that sometimes you just want a little privacy. The spot has only one line of text otherwise, and it’s in keeping with Apple’s messaging on privacy over the long and short term. “If privacy matters in your life, it should matter to the phone your life is on.”

The spot will air tonight in primetime in the U.S. and extend through March Madness. It will then air in select other countries.

You’d have to be hiding under a rock not to have noticed Apple positioning privacy as a differentiating factor between itself and other companies. Beginning a few years ago, CEO Tim Cook began taking more and more public stances on what the company felt to be your “rights” to privacy on its platform and how that differed from other companies. The undercurrent was that Apple could take this stance because its first-party business relies on a relatively direct relationship with customers who purchase its hardware and, increasingly, its services.

This stands in contrast to the model of other tech giants like Google or Facebook, which insert an interstitial layer of monetization on top of that relationship, applying personal information about you (in somewhat anonymized fashion) to sell their platforms to advertisers, which in turn can sell to you more effectively.

Turning the ethical high ground into a marketing strategy is not without its pitfalls, though, as Apple has discovered recently with a (now patched) high-profile FaceTime bug that allowed people to turn your phone into a listening device, Facebook’s manipulation of App Store permissions and the revelation that there was some long overdue house cleaning needed in its Enterprise Certificate program.

I did find it interesting that the iconography of the “Private Side” spot very, very closely associates the concepts of privacy and security. They are separate, but interrelated, obviously. This spot says these are one and the same. It’s hard to enforce privacy without security, of course, but in the mind of the public I think there is very little difference between the two.

The App Store itself, of course, still hosts apps from Google and Facebook among thousands of others that use personal data of yours in one form or another. Apple’s argument is that it protects the data you give to your phone aggressively by processing on the device, collecting minimal data, disconnecting that data from the user as much as possible and giving users as transparent a control interface as possible. All true. All far, far better efforts than the competition.

Still, there is room to run, I feel, when it comes to Apple adjudicating what should be considered a societal norm when it comes to the use of personal data on its platform. If it’s going to be the absolute arbiter of what flies on the world’s most profitable application marketplace, it might as well use that power to get a little more feisty with the bigcos (and littlecos) that make their living on our data.

I mention the issues Apple has had above not as a dig, though some might be inclined to view Apple integrating privacy with marketing as boldness bordering on hubris. I, personally, think there’s still a major difference between a company that has situational loss of privacy while having a systemic dedication to privacy and, well, most of the rest of the ecosystem which exists because they operate an “invasion of privacy as a service” business.

Basically, I think stating privacy is your mission is still supportable, even if you have bugs. But attempting to ignore that you host the data platforms that thrive on it is a tasty bit of prestidigitation.

But that might be a little too verbose as a tagline.



Telegram gets 3M new signups during Facebook apps’ outage

Posted by on Mar 14, 2019 in Apps, China, encryption, Europe, Facebook, instagram, internet censorship, Iran, messaging apps, messaging services, Moscow, Pavel Durov, Privacy, russia, Social, Social Media, Telegram, vk | 0 comments

Messaging platform Telegram claims to have had a surge in signups during a period of downtime for Facebook’s rival messaging services.

In a message sent to his Telegram channel, founder Pavel Durov wrote: “I see 3 million new users signed up for Telegram within the last 24 hours.”

It’s probably not a coincidence that Facebook and its related family of apps went down for most of Wednesday, as we reported earlier. At the time of writing Instagram’s service has been officially confirmed restored. Unofficially Facebook also appears to be back online, at least here in Europe.

Durov doesn’t offer an explicit explanation for Telegram’s sudden spike in sign ups, but he does take a thinly veiled swipe at social networking giant Facebook — whose founder recently claimed he now plans to pivot the ad platform to ‘privacy’.

“Good,” adds Durov on his channel, welcoming Telegram’s 3M newbies. “We have true privacy and unlimited space for everyone.”

A contact at Telegram confirmed to TechCrunch that the Facebook apps’ downtime is the likely cause of its latest sign up spike, telling us: “These outages always drive new users.”

Though they also credited growth to “the mainstream overall increasing understanding about Facebook’s abusive attention harvesting practices”.

A year ago Telegram announced passing 200M monthly active users. Though the platform has faced restrictions and/or blocks in some markets (principally Russia and Iran, as well as China) — apparently for refusing government requests for encryption keys and/or user information.

In Durov’s home country of Russia, the government is also now moving to tighten Internet restrictions via new legislation, and thousands of people took to the streets in Moscow and other Russian cities this weekend to protest growing Internet censorship, per Reuters.

Such restrictions could increase demand for Telegram’s encrypted messaging service in the country as the app does appear to still be partially accessible there.

Durov, who famously left Russia in 2014 — stepping away from his home country and an earlier social network he founded (VK.com) because of his stance on free speech — has sought to thwart the Russian government’s Telegram blocks via legal and technical measures.

The Telegram messaging platform has of course also had its own issues with less political downtime too.

In a tweet last fall the company confirmed a server cluster had gone down, potentially affecting users in the Middle East, Africa and Europe. Although in that case the downtime only lasted a few hours.



Facebook won’t store data in countries with human rights violations — except Singapore

Posted by on Mar 13, 2019 in Amazon, Asia, Facebook, Government, human rights, Human Rights Watch, Privacy, Singapore | 0 comments

As soon as Mark Zuckerberg pledged in a lengthy 3,225-word blog post not to build data centers in countries with poor human rights records, he had already broken his promise.

He chose to ignore Singapore, which the Facebook founder had posted about only months earlier, declaring the micro-state home to the company’s first data center in Asia, built to “serve everyone.”

Zuckerberg was clear: “As we build our infrastructure around the world, we’ve chosen not to build data centers in countries that have a track record of violating human rights like privacy or freedom of expression.”

If there are two things Singapore is known for, it’s that there is neither privacy nor freedom of expression.

For all its glitz and economic power, Singapore’s human rights record falls far below internationally recognized norms. The state, with a population of five million, consistently falls close to the bottom in worldwide rankings by rights groups for its oppressive laws against freedom of speech, expression and assembly and limited rights to privacy under its expanding surveillance system. Worse, the country is known for its horrendous treatment of those in the LGBTQ+ community, whose actions are heavily restricted and any public act or depiction is deemed criminal. And even the media are under close watch and often threatened with rebuke and defamation lawsuits by the government.

Reporters Without Borders said Singapore has an “intolerant government,” and Human Rights Watch called some of the country’s more restrictive laws “draconian.”

We brought these points up to Facebook, but the company doesn’t see Zuckerberg’s remarks as contradictory or hypocritical.

“Deciding where to locate a new data center is a multi-year process that considers dozens of different factors, including access to renewable energy, connectivity, and a strong local talent pool,” said Facebook spokesperson Jennifer Hakes. “An essential factor, however, is ensuring that we can protect any user data stored in the facility.”

“This was the key point that Mark Zuckerberg emphasized in his post last week,” said Hakes. “We looked at all these factors carefully in Singapore and determined that it was the right location for our first data center in Asia.”

It’s ironic that Facebook’s own platform has been used by Singapore’s government to crack down on vocal opponents of the state. Jolovan Wham, an activist, was jailed after organizing a public assembly from a Facebook page. The assembly’s permit was denied, so he switched the venue to a Skype call.

When asked, Facebook declined to comment on what it considers unacceptable human rights by a country, only referring back to Zuckerberg’s post.

Singapore remains an important hub for the tech industry and business, particularly for Western companies, which have thrown human rights to the wind even as they tout their commitment to privacy and free speech at home. Amazon, Microsoft, Google, DigitalOcean, Linode and OVH all have data centers in the micro-state.

But only one to date has made public commitments to not store data in countries with poor records on human rights.

Why has Facebook made an exception for Singapore? It’s a mystery to everyone but Mark Zuckerberg.

