
The blog of DataDiggers


A brief history of Uber’s bumpy road to an IPO

Posted on May 10, 2019

It’s been nine years since UberCab made its first appearance on the WordPress pages of this website. In the ensuing years, the startup has grown from an upstart looking to upend the taxi cab cartels, to a juggernaut that has its hands in every form of transportation and logistics service it can think of.

In the process, Uber has done some things that might give (and in fact have given) some shareholders pause.

From its first pitch deck to this historic public offering, TechCrunch has covered the über startup that has defined the post-financial-crisis era of consumer venture investing.

Here are some of the things that shouldn’t get swept into the dustbin of Uber’s history as the company makes its debut as a public company.

  • In 2014 Uber used a tool called “God View” to track the movements of passengers and shared those details publicly. At the time, the company was worth a cool $18.2 billion, and was already on the road to success (an almost pre-ordained journey given the company’s investors and capitalization), but even then, it could not get out of the way of its darker impulses.
  • A former executive of the company, Emil Michael, suggested that Uber should investigate journalists who were critical of the company and its business practices (including PandoDaily editor Sarah Lacy).
  • As it expanded internationally, Uber came under fire for lax hiring practices for its drivers. In India, the company was banned in New Delhi, after a convicted sex offender was arrested on suspicion of raping a female passenger.
  • Last year, the Equal Employment Opportunity Commission opened an investigation into the company for gender discrimination around hiring and salaries for women at the company. Uber’s problems with harassment were famously documented by former employee Susan Fowler in a blog post that helped spur a reckoning for the tech sector.
  • Uber has been forced to pay fines for its inability to keep passenger and driver information private. The company has agreed to 20 years of privacy audits and has paid a fine to settle a case that was opened by the Federal Trade Commission dating back to 2017.
  • While Uber was not found to be criminally liable in the death of an Arizona pedestrian who was struck and killed by a self-driving car from the company’s fleet, it remains the only company with an autonomous vehicle involved in the death of a pedestrian.
  • Beyond its problems with federal regulators, Uber has also had problems adhering to local laws. In Colorado, Uber was fined nearly $10 million for not adhering to the state’s requirements regarding background checks of its drivers.
  • Uber was also sued by other companies. Notably, it was involved in a lengthy and messy trade secret dispute with Alphabet’s onetime self-driving car unit, Waymo. That was for picking up former Waymo employee Anthony Levandowski and some know-how that the former Alphabet exec allegedly acquired improperly before heading out the door.
  • Uber even had dueling lawsuits going between and among its executives and major shareholders. When Travis Kalanick was ousted by the Uber board, the decision reverberated through its boardroom. As part of that battle for control, Benchmark, an early investor in Uber, sued the company’s founder and former chief executive, Travis Kalanick, for fraud, breach of contract and breach of fiduciary duty.
  • Uber’s chief people officer, Liane Hornsey, was forced to resign following a previously unreported investigation into her alleged systematic dismissals of racial discrimination complaints within Uber.
  • Lawsuits against the company not only dealt with its treatment of gender and race issues, but also for accessibility problems with the ride-hailing service. The company was sued for allegedly violating Title II of the Americans with Disabilities Act and the California Disabled Persons Act.
  • The ride-hailing service also isn’t free from legal woes in international markets. Earlier this year, the company paid around $3 million to settle charges that it had violated local laws by operating illegally.
  • Finally, the company’s lax driver screening policies have led to multiple reports of assaults on Uber passengers by drivers. Uber recently ended its policy of forcing those passengers into mandatory arbitration proceedings to adjudicate such claims.
  • Not even the drivers who form the core of Uber’s service are happy with the company. On the eve of the public offering, which was set to make the company’s executives millions, a strike in cities across the country brought drivers’ complaints squarely in front of the executive team.


Source: TechCrunch


FTC tells ISPs to disclose exactly what information they collect on users and what it’s for

Posted on Mar 26, 2019

The Federal Trade Commission, in what could be considered a prelude to new regulatory action, has issued an order to several major internet service providers requiring them to share every detail of their data collection practices. The information could expose patterns of abuse or otherwise troubling data use against which the FTC — or states — may want to take action.

The letters requesting info (detailed below) went to Comcast, Google, T-Mobile and both the fixed and wireless sub-companies of Verizon and AT&T. These “represent a range of large and small ISPs, as well as fixed and mobile Internet providers,” an FTC spokesperson said. I’m not sure which is meant to be the small one, but I welcome any information the agency can extract from any of them.

Since the Federal Communications Commission abdicated its role in enforcing consumer privacy at these ISPs when it and Congress allowed the Broadband Privacy Rule to be overturned, others have taken up the torch, notably California and even individual cities like Seattle. But for enterprises spanning the nation, national-level oversight is preferable to a patchwork approach, and so it may be that the FTC is preparing to take a stronger stance.

To be clear, the FTC already has consumer protection rules in place and could already go after an internet provider if it were found to be abusing the privacy of its users — you know, selling their location to anyone who asks or the like. (Still no action there, by the way.)

But the evolving media and telecom landscape, in which we see enormous companies devouring one another to best provide as many complementary services as possible, requires constant reevaluation. As the agency writes in a press release:

The FTC is initiating this study to better understand Internet service providers’ privacy practices in light of the evolution of telecommunications companies into vertically integrated platforms that also provide advertising-supported content.

Although the FTC is always extremely careful with its words, this statement gives a good idea of what they’re concerned about. If Verizon (our parent company’s parent company) wants to offer not just the connection you get on your phone, but the media you request, the ads you are served and the tracking you never heard of, it needs to show that these businesses are not somehow shirking rules behind the scenes.

For instance, if Verizon Wireless says it doesn’t collect or share information about what sites you visit, but the mysterious VZ Snooping Co (fictitious, I should add) scoops all that up and then sells it for peanuts to its sister company, that could amount to a deceptive practice. Of course it’s rarely that simple (though don’t rule it out), but the only way to be sure is to comprehensively question everyone involved and carefully compare the answers with real-world practices.

How else would we catch shady zero-rating practices, zombie cookies, backdoor deals or lip service to existing privacy laws? It takes a lot of poring over data and complaints by the detail-oriented folks at these regulatory bodies to find things out.

To that end, the letters to ISPs ask for a whole boatload of information on companies’ data practices. Here’s a summary:

  • categories of personal information collected about consumers or devices, including purposes, methods and sources of collection
  • how the data has been or is being used
  • third parties that provide or are provided this data and what limitations are imposed thereupon
  • how such data is combined with other types of information and how long it is retained
  • internal policies and practices limiting access to this information by employees or service providers
  • any privacy assessments done to evaluate associated risks and policies
  • how data is aggregated, anonymized or deidentified (and how those terms are defined)
  • how aggregated data is used, shared, etc.
  • “any data maps, inventories, or other charts, schematics, or graphic depictions” of information collection and storage
  • total number of consumers who have “visited or otherwise viewed or interacted with” the privacy policy
  • whether consumers are given any choice in collection and retention of data, and what the default choices are
  • total number and percentage of users that have exercised such a choice, and what choices they made
  • whether consumers are incentivized (or threatened) to opt into data collection and how those programs work
  • any process for allowing consumers to “access, correct, or delete” their personal information
  • data deletion and retention policies for such information

Substantial, right?

Needless to say, some of this information may not be particularly flattering to ISPs. If only 1 percent of consumers have ever chosen to share their information, for instance, that reflects badly on sharing it by default. And if data can be combined across categories or services to de-anonymize users, even potentially, that’s another major concern.

The FTC representative declined to comment on whether there would be any collaboration with the FCC on this endeavor, whether it was preliminary to any other action and whether it can or will independently verify the information provided by the ISPs contacted. That’s an important point, considering how poorly these same companies represented their coverage data to the FCC for its yearly broadband deployment report. A reality check would be welcome.

You can read the rest of the letter here (PDF).


Source: TechCrunch


Venture investors and startup execs say they don’t need Elizabeth Warren to defend them from big tech

Posted on Mar 8, 2019

Responding to Elizabeth Warren’s call to regulate and break up some of the nation’s largest technology companies, the venture capitalists that invest in technology companies are advising the presidential hopeful to move slowly and not break anything.

Warren’s plan called for regulators to be appointed to oversee the unwinding of several acquisitions that were critical to the development of the core technology that makes Alphabet’s Google and the social media giant Facebook so profitable… and Zappos.

Warren also wanted regulation in place that would block companies making over $25 billion that operate as social media or search platforms or marketplaces from owning companies that also sell services on those marketplaces.

As a whole, venture capitalists viewing the policy were underwhelmed.

“As they say on Broadway, ‘you gotta have a gimmick’ and this is clearly Warren’s,” says Ben Narasin, an investor at New Enterprise Associates, one of the nation’s largest investment firms, which has $18 billion in assets under management and has invested in consumer companies like Jet, an online and mobile retailer that competed with Amazon and was sold to Walmart for $3.3 billion.

“Decades ago, at the peak of Japanese growth as a technology competitor on the global stage, the US government sought to break up IBM. This is not a new model, and it makes no sense,” says Narasin. “We slow down our country, our economy and our ability to innovate when the government becomes excessively aggressive in efforts to break up technology companies, because they see them through a prior-decades lens, when they are operating in a future-decade reality. This too shall pass.”

Balaji Srinivasan, the chief technology officer of Coinbase, took to Twitter to offer his thoughts on the Warren plan. “If big companies like Google, Facebook and Amazon are prevented from acquiring startups, that actually reduces competition,” Srinivasan writes.

“There are two separate issues here that are being conflated. One issue is do we need regulation on the full platform companies. And the answer is absolutely,” says Venky Ganesan, the managing director of Menlo Ventures. “These platforms have a huge impact on society at large and they have huge influence.”

But while the platforms need to be regulated, Ganesan says, Senator Warren’s approach is an exercise in overreach.

“That plan is like taking a bazooka to a knife fight. It’s overwhelming and it’s not commensurate with the issues,” Ganesan says. “I don’t think at the end of the day venture capital is worrying about competition from these big platform companies. [And] as the proposal is composed it would create more obstacles rather than less.”

Warren’s own example of the antitrust cases brought against companies like AT&T and Microsoft is a good model for how to proceed, Ganesan says. “We want to have the technocrats at the FTC figure out the right way to bring balance.”

Kara Nortman, a partner with the Los Angeles-based firm Upfront Ventures, is also concerned about the potential unforeseen consequences of Warren’s proposals.

“The specifics of the policy as presented strike me as having potentially negative consequences for innovation. These companies are funding massive innovation initiatives in our country. They’re creating jobs and taking risks in areas of technology development where we could potentially fall behind other countries and wind up reducing our quality of life,” Nortman says. “We’re not seeing that innovation or initiative come from the government – or that support for encouraging immigration and, by extension, embracing the talented foreign entrepreneurs that could develop new technologies and businesses.”

Nortman sees the Warren announcement as an attempt to start a dialogue between government regulators and big technology companies.

“My hope is that this is the beginning of a dialogue that is constructive,” Nortman says. “And since Elizabeth Warren is a thoughtful policymaker this is likely the first salvo toward an engagement with the technology community to work collaboratively on issues that we all want to see solved and that some of us are dedicating our career in venture to help solving.”


Source: TechCrunch


FTC creates antitrust task force to monitor tech industry

Posted on Feb 26, 2019

The field of technology and the business practices within it tend to advance faster than regulators can keep up. But the FTC is making a concerted effort with a new 17-lawyer tech task force dedicated to ensuring “free and fair competition” and watching for anticompetitive conduct among technology companies.

This isn’t necessarily a precursor to some big action like breaking up a big company or imposing new rules. It seems to be more a recognition that the FTC needs to be able to get up to speed quickly and move decisively on tech matters, and a crack team of tech-savvy staff attorneys is the way to do it.

The Technology Task Force will live under the competition bureau within the FTC, the director of which, Bruce Hoffman, commented as follows in the agency’s announcement:

Technology markets, which are rapidly evolving and touch so many other sectors of the economy, raise distinct challenges for antitrust enforcement. By centralizing our expertise and attention, the new task force will be able to focus on these markets exclusively – ensuring they are operating pursuant to the antitrust laws, and taking action where they are not.

That it is under this bureau and not the bureau of consumer protection gives a good indicator of its purpose. This won’t be a way for the FTC to, for instance, more closely scrutinize Google or Facebook’s shady user data practices. That said, the lawyers are said to have expertise in “advertising, social networking, mobile operating systems and apps, and platform businesses,” which I doubt they mention for no reason.

Instead it is likely to be more focused on investigating and reporting on anticompetitive practices that may result from M&A deals or quasi-monopolies like Amazon and Facebook. The fascinating Amazon’s Antitrust Paradox paper from a while back noted all kinds of ways that such a company can slip through loopholes while performing actions that look, walk and talk like monopolistic ones.

But just what exactly constitutes such practices legally speaking is a matter of considerable debate. No doubt the lawyers and their tech fellows, with whom they will consult, will spend a great deal of time sifting through old cases and precedents and seeing what does and doesn’t apply. The team will be performing its own investigations of ongoing and completed mergers, and will also supply investigative services to other branches of the agency.

Essentially it’s an indication that the FTC will be taking tech antitrust more seriously going forward, and dedicating more and better organized resources to the task of monitoring the sector. That’s probably not the kind of thing big tech companies like to hear.

I’ve asked the agency some questions as far as markets they’ll be watching, behaviors they’re looking for and so on. I’ll update this post if I hear back.


Source: TechCrunch


UK parliament calls for antitrust, data abuse probe of Facebook

Posted on Feb 18, 2019

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to users’ data to developers and advertisers in order to increase revenue and/or usage of its own platform; and what Facebook claimed was ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in order to build voter profiles and try to influence elections.

The committee’s conclusion about Facebook’s business is a damning one with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’ when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report. Update: Facebook said it rejects all claims it breached data protection and competition laws.

In a statement attributed to UK public policy manager, Karim Palant, the company told us:

We share the Committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.

We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. But we’re not waiting. We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for 7 years. No other channel for political advertising is as transparent and offers the tools that we do.

We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.

While we still have more to do, we are not the same company we were a year ago. We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.

Last fall Facebook was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga. It is appealing the ICO’s penalty, however, claiming there’s no evidence UK users’ data was misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

“Protecting our data helps us secure the past, but protecting inferences and uses of Artificial Intelligence (AI) is what we will need to protect our future,” the committee warns.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” says the committee. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, but one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18. The government said then that it has not ruled out doing so.

We’ve reached out to the DCMS for a response to the latest committee report. Update: A department spokesperson told us:

The Government’s forthcoming White Paper on Online Harms will set out a new framework for ensuring disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.

This week the Culture Secretary will travel to the United States to meet with tech giants including Google, Facebook, Twitter and Apple to discuss many of these issues.

We welcome this report’s contribution towards our work to tackle the increasing threat of disinformation and to make the UK the safest place to be online. We will respond in due course.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by an app developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations are addressed at social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action, the committee received only a cool response from ministers then, with the government tied up trying to shape a response to the 2016 Brexit referendum vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins, MP and chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…

Source: Web and publications unit, House of Commons

“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.

“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.

Three senior managers knew

Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.

The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked that the names remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.

The committee calls this an example of “a profound failure” of internal governance, also branding it as evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.

Here’s the committee’s account of that detail:

We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.

The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

This report was updated with comment from Facebook and the UK government.


Source: The Tech Crunch

Read More

Everything you need to know about Facebook, Google’s app scandal

Posted by on Feb 1, 2019 in app-store, Apple, Apple App Store, Apps, Europe, Facebook, Federal Trade Commission, Finance, General Data Protection Regulation, Google, messaging apps, mobile devices, operating systems, Privacy, Security, Smartphones, Social Media, Sonos, United States | 0 comments

Facebook and Google landed in hot water with Apple this week after two investigations by TechCrunch revealed the misuse of internal-only certificates — leading to their revocation, which led to a day of downtime at the two tech giants.

Confused about what happened? Here’s everything you need to know.

How did all this start, and what happened?

On Monday, we revealed that Facebook was misusing an Apple-issued enterprise certificate that is only meant for companies to use to distribute internal, employee-only apps without having to go through the Apple App Store. But the social media giant used that certificate to sign an app that Facebook distributed outside the company, violating Apple’s rules.

The app, known simply as “Research,” allowed Facebook unparalleled access to all of the data flowing out of a device. This included access to some of the users’ most sensitive network data. Facebook paid users — including teenagers — $20 per month to install the app. But it wasn’t clear exactly what kind of data was being vacuumed up, or for what reason.

It turns out that “Research” was a repackaged version of an app that was effectively banned from Apple’s App Store last year for collecting too much data on users.

Apple was angry that Facebook was misusing its special-issue enterprise certificates to push an app it already banned, and revoked it — rendering the app unable to open. But Facebook was using that same certificate to sign its other employee-only apps, effectively knocking them offline until Apple re-issued the certificate.

Then, it turned out Google was doing almost exactly the same thing with its Screenwise app, and Apple’s ban-hammer fell again.

What’s the controversy over these enterprise certificates and what can they do?

If you want to develop Apple apps, you have to abide by its rules — and Apple expressly makes companies agree to its terms.

A key rule is that Apple doesn’t allow app developers to bypass the App Store, where every app is vetted to ensure it’s as secure as it can be. It does, however, grant exceptions for enterprise developers, such as companies that want to build apps that are only used internally by employees. Facebook and Google in this case signed up to be enterprise developers and agreed to Apple’s developer terms.

Each Apple-issued certificate grants companies permission to distribute apps they develop internally — including pre-release versions of the apps they make, for testing purposes. But these certificates aren’t allowed to be used for ordinary consumers, as they have to download apps through the App Store.

What’s a “root” certificate, and why is its access a big deal?

Because Facebook’s Research and Google’s Screenwise apps were distributed outside of Apple’s App Store, users had to install them manually — a process known as sideloading. That requires users to go through a convoluted few steps: downloading the app itself, then opening and trusting either Facebook or Google’s enterprise developer code-signing certificate, which is what allows the app to run.

Both companies required users, after installing the app, to agree to an additional configuration step — known as a VPN configuration profile — allowing all of the data flowing out of that user’s phone to funnel down a special tunnel that directs it all to either Facebook or Google, depending on which app was installed.

This is where the Facebook and Google cases differ.

Google’s app collected data and sent it off to Google for research purposes, but couldn’t access encrypted data — such as the content of any network traffic protected by HTTPS, as most apps in the App Store and internet websites are.

Facebook, however, went much further. Its users were asked to go through an additional step to trust another type of certificate at the “root” level of the phone. Trusting this Facebook Research root certificate authority allowed the social media giant to look at all of the encrypted traffic flowing out of the device — essentially what we call a “man-in-the-middle” attack. That allowed Facebook to sift through your messages, your emails and any other bit of data that leaves your phone. Only apps that use certificate pinning — which reject any certificate that isn’t their own — were protected, such as iMessage, Signal and other end-to-end encrypted apps.
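To see why pinning defeats this kind of interception, here is a minimal sketch in Python of the idea behind certificate pinning. The host name and the pinned fingerprint below are placeholders, not values from any real app: an app would embed the SHA-256 fingerprint of its own server’s certificate at build time, then refuse any TLS connection whose certificate doesn’t match — including one presented by a man-in-the-middle proxy, even a proxy blessed by a rogue root CA the user was tricked into trusting.

```python
import hashlib
import socket
import ssl

# Hypothetical pinned value: the hex SHA-256 fingerprint of the server
# certificate the app ships with (placeholder, not a real fingerprint).
PINNED_SHA256 = "0" * 64


def fingerprint(der_cert: bytes) -> str:
    """Return the hex SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()


def connect_with_pinning(host: str, port: int = 443) -> bool:
    """Open a TLS connection and accept it only if the server's certificate
    matches the pinned fingerprint. An interception proxy must present its
    own certificate, so its fingerprint differs and the check fails."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # binary_form=True returns the raw DER bytes of the peer cert.
            der_cert = tls.getpeercert(binary_form=True)
            return fingerprint(der_cert) == PINNED_SHA256
```

Apps like iMessage and Signal apply the same principle (pinning the certificate or its public key), which is why traffic to them stayed opaque even with Facebook’s root certificate installed.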

Facebook’s Research app requires root certificate access, which lets Facebook gather almost any piece of data transmitted by your phone (Image: supplied)

Google’s app might not have been able to look at encrypted traffic, but the company still flouted the rules — and had its separate enterprise developer code-signing certificate revoked anyway.

What data did Facebook have access to on iOS?

It’s hard to know for sure, but it definitely had access to more data than Google.

Facebook said its app was to help it “understand how people use their mobile devices.” In reality, at root traffic level, Facebook could have accessed any kind of data that left your phone.

Will Strafach, a security expert with whom we spoke for our story, said: “If Facebook makes full use of the level of access they are given by asking users to install the certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps — including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.”

Remember: this isn’t “root” access to your phone, like jailbreaking, but root access to the network traffic.

How does this compare to the technical ways other market research programs work?

In fairness, market research apps like these aren’t unique to Facebook or Google. Several other companies, like Nielsen and comScore, run similar programs, but neither asks users to install a VPN or provide root access to the network.

In any case, Facebook already has a lot of your data — as does Google. Even if the companies only wanted to look at your data in aggregate with other people’s, they could still home in on who you talk to, when, for how long and, in some cases, what about. It might not have been such an explosive scandal had Facebook not spent the last year cleaning up after several security and privacy breaches.

Can they capture the data of people the phone owner interacts with?

In both cases, yes. In Google’s case, any unencrypted data that involves another person’s data could have been collected. In Facebook’s case, it goes far further — any data of yours that interacts with another person, such as an email or a message, could have been collected by Facebook’s app.

How many people did this affect?

It’s hard to know for sure. Neither Google nor Facebook has said how many users they have. Between them, it’s believed to be in the thousands. As for the employees affected by the app outages, Facebook has more than 35,000 employees and Google has more than 94,000 employees.

Why did internal apps at Facebook and Google break after Apple revoked the certificates?

You might own your Apple device, but Apple still gets to control what goes on it.

Apple can’t control Facebook’s root certificates, but it can control the enterprise certificates it issues. After Facebook was caught out, Apple said: “Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.” That meant any app that relied on Facebook’s enterprise certificate — including inside the company — would fail to load. That’s not just pre-release builds of Facebook, Instagram and WhatsApp that staff were working on, but reportedly the company’s travel and collaboration apps were down. In Google’s case, even its catering and lunch menu apps were down.

Facebook’s internal apps were down for about a day, while Google’s internal apps were down for a few hours. None of Facebook’s or Google’s consumer services were affected, however.

How are people viewing Apple in all this?

Nobody seems thrilled with Facebook or Google at the moment, but not many are happy with Apple, either. Even though Apple sells hardware and doesn’t use your data to profile you or serve you ads — like Facebook and Google do — some are uncomfortable with how much power Apple has over the customers — and enterprises — that use its devices.

In revoking Facebook’s and Google’s enterprise certificates, Apple caused downtime that had a knock-on effect inside both companies.

Is this legal in the U.S.? What about in Europe with GDPR?

Well, it’s not illegal — at least in the U.S. Facebook says it gained consent from its users. The company even said its teenage users must obtain parental consent, even though it was easily skippable and no verification checks were made. It wasn’t even explicitly clear that the children who “consented” really understood how much privacy they were really handing over.

That could lead to major regulatory headaches down the line. “If it turns out that European teens have been participating in the research effort Facebook could face another barrage of complaints under the bloc’s General Data Protection Regulation (GDPR) — and the prospect of substantial fines if any local agencies determine it failed to live up to consent and ‘privacy by design’ requirements baked into the bloc’s privacy regime,” wrote TechCrunch’s Natasha Lomas.

Who else has been misusing certificates?

Don’t think that Facebook and Google are alone in this. It turns out that a lot of companies might be flouting the rules, too.

According to examples surfaced by users on social media, Sonos uses enterprise certificates for its beta program, as does finance app Binance, as well as DoorDash for its fleet of contractors. It’s not known if Apple will also revoke their enterprise certificates.

What next?

It’s anybody’s guess, but don’t expect this situation to die down any time soon.

Facebook may face repercussions with Europe, as well as at home. Two U.S. senators, Mark Warner and Richard Blumenthal, have already called for action, accusing Facebook of “wiretapping teens.” The Federal Trade Commission may also investigate, if Blumenthal gets his way.


Source: The Tech Crunch

Read More

You Should Have the Right to Sue Apple for Antitrust Violations

Posted by on Dec 12, 2018 in Antitrust Laws and Competition Issues, apple inc, Federal Trade Commission, illinois, iPhone, Justice Department, United States | 0 comments

In Apple v. Pepper, the Supreme Court will decide whether iPhone App Store customers are entitled to make their case against the tech giant.
Source: New York Times

Read More

The Monopolization of America

Posted by on Nov 26, 2018 in Antitrust Laws and Competition Issues, AT&T Inc, Brandeis, Louis D, Corporations, Democratic Party, Federal Trade Commission, Income Inequality, Klobuchar, Amy, Labor and Jobs, Mergers, Acquisitions and Divestitures, Republican Party, Standard Oil Co, Supreme Court (US), Taxation, United States Economy, United States Politics and Government | 0 comments

In one industry after another, big companies have become more dominant over the past 15 years, new data show.
Source: New York Times

Read More

In Twitter Purge, Top Accounts Lose Millions of Followers

Posted by on Jul 13, 2018 in Appointments and Executive Changes, Bieber, Justin, Celebrities, Computers and the Internet, Dorsey, Jack, Federal Trade Commission, Fleischer, Ari, Kagame, Paul, Kardashian, Kim, Kutcher, Ashton, New York Times, Obama, Barack, Rania, Queen of Jordan, Rwanda, Social Media, Twitter, United States Politics and Government, Winfrey, Oprah | 0 comments

Some popular figures — including Barack Obama, Ellen DeGeneres, Justin Bieber, Rihanna and Ashton Kutcher — lost a million or more in the crackdown.
Source: New York Times

Read More

Ending the Dead-End-Job Trap

Posted by on Jul 12, 2018 in Antitrust Laws and Competition Issues, Attorneys General, Booker, Cory A, Burger King Corp, Dunkin Donuts, Ellison, Keith, Fast Food Industry, Federal Trade Commission, Franchises, Justice Department, Krueger, Alan B, Labor and Jobs, Massachusetts, McDonald's Corporation, National Restaurant Assn, Pizza Hut, Restaurants, Silicon Valley (Calif), Wages and Salaries, Warren, Elizabeth | 0 comments

State attorneys general are cracking down on businesses that prohibit their franchisees from hiring workers away from one another.
Source: New York Times

Read More