The blog of DataDiggers

Zuckerberg says breaking up Facebook “isn’t going to help”

Posted by on May 11, 2019 in Apps, Chris Hughes, Drama, Facebook, Government, Mark Zuckerberg, Nick Clegg, Policy, Privacy, Social, TC | 0 comments

With the look of someone betrayed, Facebook’s CEO has fired back at co-founder Chris Hughes and his brutal NYT op-ed calling for regulators to split up Facebook, Instagram, and WhatsApp. “When I read what he wrote, my main reaction was that what he’s proposing that we do isn’t going to do anything to help solve those issues. So I think that if what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference,” Zuckerberg told France Info while in Paris to meet with French President Emmanuel Macron.

Zuckerberg’s argument boils down to the idea that Facebook’s specific problems with privacy, safety, misinformation, and speech won’t be directly addressed by breaking up the company, and that a breakup would instead hinder its efforts to safeguard its social networks. The Facebook family of apps would theoretically have fewer economies of scale when investing in safety technology like artificial intelligence to spot bots spreading voter suppression content.

Facebook’s co-founders (from left): Dustin Moskovitz, Chris Hughes, and Mark Zuckerberg

Hughes claims that “Mark’s power is unprecedented and un-American” and that Facebook’s rampant acquisitions and copying have made it so dominant that it deters competition. The call echoes other early execs like Facebook’s first president Sean Parker and growth chief Chamath Palihapitiya who’ve raised alarms about how the social network they built impacts society.

But Zuckerberg argues that Facebook’s size benefits the public. “Our budget for safety this year is bigger than the whole revenue of our company was when we went public earlier this decade. A lot of that is because we’ve been able to build a successful business that can now support that. You know, we invest more in safety than anyone in social media,” Zuckerberg told journalist Laurent Delahousse.

The Facebook CEO’s comments were largely missed by the media, in part because the TV interview was heavily dubbed into French with no transcript. But written out here for the first time, his quotes offer a window into how deeply Zuckerberg dismisses Hughes’ claims. “Well [Hughes] was talking about a very specific idea of breaking up the company to solve some of the social issues that we face,” Zuckerberg says before trying to decouple solutions from antitrust regulation. “The way that I look at this is, there are real issues. There are real issues around harmful content and finding the right balance between expression and safety, for preventing election interference, on privacy.”

Claiming that a breakup “isn’t going to do anything to help” is a more unequivocal refutation of Hughes’ claim than that of Facebook VP of communications and former UK deputy Prime Minister Nick Clegg. He wrote in his own NYT op-ed today that “what matters is not size but rather the rights and interests of consumers, and our accountability to the governments and legislators who oversee commerce and communications . . . Big in itself isn’t bad. Success should not be penalized.”

Mark Zuckerberg and Chris Hughes

Something certainly must be done to protect consumers. Perhaps that’s a breakup of Facebook. At the least, banning it from acquiring more social networks of sufficient scale, so it couldn’t snatch another Instagram from its crib, would be an expedient and attainable remedy.

But the sharpest point of Hughes’ op-ed was how he identified that users are trapped on Facebook. “Competition alone wouldn’t necessarily spur privacy protection — regulation is required to ensure accountability — but Facebook’s lock on the market guarantees that users can’t protest by moving to alternative platforms,” he writes. After Cambridge Analytica, “people did not leave the company’s platforms en masse. After all, where would they go?”

That’s why, given critics’ call for competition and Zuckerberg’s own support for interoperability, a core tenet of regulation must be making it easier for users to switch from Facebook to another social network. As I’ll explore in an upcoming piece, until users can easily bring their friend connections or ‘social graph’ somewhere else, there’s little to compel Facebook to treat them better.


Source: The Tech Crunch

Three ‘new rules’ worth considering for the internet

Posted by on May 9, 2019 in Column, Internet of things, internet security, Mark Zuckerberg, Opinion, regulations | 0 comments

In a recent commentary, Facebook’s Mark Zuckerberg argues for new internet regulation starting in four areas: harmful content, election integrity, privacy and data portability. He also advocates that government and regulators “need a more active role” in this process. This call to action should be welcome news as the importance of the internet to nearly all aspects of people’s daily lives seems indisputable. However, Zuckerberg’s new rules could be expanded, as part of the follow-on discussion he calls for, to include several other necessary areas: security-by-design, net worthiness and updated internet business models.

Security-by-design should be an equal priority with functionality for the network-connected devices, systems and services that comprise the Internet of Things (IoT). One estimate suggests that the number of connected devices will reach 125 billion by 2030, increasing 50% annually over the next 15 years. Each component on the IoT represents a possible vulnerability and point of entry into the system. The Department of Homeland Security has developed strategic principles for securing the IoT. The first principle is to “incorporate security at the design phase.” This seems highly prudent and very timely, given the anticipated growth of the internet.

Ensuring net worthiness — that is, that our internet systems meet appropriate and up-to-date standards — seems another essential issue, one that might be addressed under Zuckerberg’s call for enhanced privacy. Today’s internet is a hodge-podge of different generations of digital equipment, unclear standards for what constitutes internet privacy and growing awareness of the likely scenarios that could threaten networks and users’ personal information.

Recent cyber incidents and concerns have illustrated these shortfalls. One need only look at the Office of Personnel Management (OPM) hack that exposed the private information of more than 22 million government civilian employees to see how older methods for storing information, lack of network monitoring tools and insecure network credentials resulted in a massive data theft. Many networks, including some supporting government systems and hospitals, are still running Windows XP software from the early 2000s. One estimate is that 5.5% of the 1.5 billion devices running Microsoft Windows are running XP, which is now “well past its end-of-life.” In 2016, a distributed denial of service attack against the web security firm Dyn exposed critical vulnerabilities in the IoT that may also need to be addressed.

Updated business models may also be required to address internet vulnerabilities. The internet has its roots as an information-sharing platform. Over time, a vast array of information and services have been made available to internet users through companies such as Twitter, Google and Facebook. And these services have been made available for modest and, in some cases, no cost to the user.

Regulation is necessary, but normally occurs only once potential for harm becomes apparent.

This means that these companies are expending their own resources to collect data and make it available to users. To defray the costs and turn a profit, the companies have taken to selling advertisements and user information. In turn, this means that private information is being shared with third parties.

As the future of the internet unfolds, it might be worth considering what people would be willing to pay for access to traffic cameras to aid commutes, social media information concerning friends or upcoming events, streaming video entertainment and unlimited data on demand. In fact, the data that is available to users has likely been compiled using a mix of publicly available and private data. Failure to revise the current business model will likely only encourage more of the same concerns with internet security and privacy issues. Finding new business models — perhaps even a fee-for-service for some high-end services — that would support a vibrant internet, while allowing companies to be profitable, could be a worthy goal.

Finally, Zuckerberg’s call for government and regulators to have a more active role is imperative, but likely will continue to be a challenge. As seen in attempts at regulating technologies such as transportation safety, offshore oil drilling and drones, such regulation is necessary, but normally occurs only once potential for harm becomes apparent. The recent accidents involving the Boeing 737 Max 8 aircraft could be seen as one example of the importance of such government regulation and oversight.

Zuckerberg’s call to action suggests a pathway to move toward a new and improved internet. Of course, as Zuckerberg also highlights, his four areas would only be a start, and a broader discussion should be had as well. Incorporating security-by-design, net worthiness and updated business models could be part of this follow-on discussion.


Source: The Tech Crunch

The “splinternet” is already here

Posted by on Mar 13, 2019 in alibaba, Asia, Baidu, belgium, Brussels, censorship, chief executive officer, China, Column, corbis, Dragonfly, Eric Schmidt, eu commission, Facebook, firewall, Getty-Images, Google, great firewall, Information technology, Internet, internet access, Iran, Mark Zuckerberg, net neutrality, North Korea, online freedom, open Internet, photographer, russia, Saudi Arabia, search engines, South Korea, Sundar Pichai, Syria, Tencent, United Kingdom, United Nations, United States, Washington D.C., world wide web | 0 comments

There is no question that the arrival of a fragmented and divided internet is now upon us. The “splinternet,” where cyberspace is controlled and regulated by different countries, is no longer just a concept but a dangerous reality. With the future of the “World Wide Web” at stake, governments and advocates in support of a free and open internet have an obligation to stem the tide of authoritarian regimes isolating the web to control information and their populations.

Both China and Russia have been rapidly increasing their internet oversight, leading to increased digital authoritarianism. Earlier this month Russia announced a plan to disconnect the entire country from the internet to simulate an all-out cyberwar. And, last month China issued two new censorship rules, identifying 100 new categories of banned content and implementing mandatory reviews of all content posted on short video platforms.

While China and Russia may be two of the biggest internet disruptors, they are by no means the only ones. Cuban, Iranian and even Turkish politicians have begun pushing “information sovereignty,” a euphemism for replacing services provided by western internet companies with their own more limited but easier to control products. And a 2017 study found that numerous countries, including Saudi Arabia, Syria and Yemen have engaged in “substantial politically motivated filtering.”

This digital control has also spread beyond authoritarian regimes. Increasingly, there are more attempts to keep foreign nationals off certain web properties.

For example, digital content available to U.K. citizens via the BBC’s iPlayer is becoming increasingly unavailable to Germans. South Korea filters, censors and blocks news agencies belonging to North Korea. Never have so many governments, authoritarian and democratic, actively blocked internet access to their own nationals.

The consequences of the splinternet and digital authoritarianism stretch far beyond the populations of these individual countries.

Back in 2016, U.S. trade officials accused China’s Great Firewall of creating what foreign internet executives defined as a trade barrier. Through controlling the rules of the internet, the Chinese government has nurtured a trio of domestic internet giants, known as BAT (Baidu, Alibaba and Tencent), who are all in lock step with the government’s ultra-strict regime.

The super-apps that these internet giants produce, such as WeChat, are built for censorship. The result? According to former Google CEO Eric Schmidt, “the Chinese Firewall will lead to two distinct internets. The U.S. will dominate the western internet and China will dominate the internet for all of Asia.”

Surprisingly, U.S. companies are helping to facilitate this splinternet.

Google spent years attempting to break into the Chinese market but had difficulty coexisting with the Chinese government’s strict censorship and collection of data, so much so that in March 2010, Google chose to pull its search engines and other services out of China. Now, in 2019, Google has completely changed its tune.

Google has made censorship allowances through an entirely different Chinese internet platform called Project Dragonfly. Dragonfly is a censored version of Google’s Western search platform, with the key difference being that it blocks results for sensitive public queries.

Sundar Pichai, chief executive officer of Google Inc., sits before the start of a House Judiciary Committee hearing in Washington, D.C., U.S., on Tuesday, Dec. 11, 2018. Pichai backed privacy legislation and denied the company is politically biased, according to a transcript of testimony he plans to deliver. Photographer: Andrew Harrer/Bloomberg via Getty Images

The Universal Declaration of Human Rights states that “people have the right to seek, receive, and impart information and ideas through any media and regardless of frontiers.”

Drafted in 1948, this declaration reflects the sentiment felt following World War II, when people worked to prevent authoritarian propaganda and censorship from ever taking hold the way it once did. And, while these words were written over 70 years ago, well before the age of the internet, this declaration challenges the very concept of the splinternet and the undemocratic digital boundaries we see developing today.

As the web becomes more splintered and information more controlled across the globe, we risk the deterioration of democratic systems, the corruption of free markets and further cyber misinformation campaigns. We must act now to save a free and open internet from censorship and international maneuvering before history repeats itself.

BRUSSELS, BELGIUM – MAY 22: An Avaaz activist attends an anti-Facebook demonstration with cardboard cutouts of Facebook chief Mark Zuckerberg, on which is written “Fix Fakebook”, in front of the Berlaymont, the EU Commission headquarters, on May 22, 2018 in Brussels, Belgium. Avaaz.org is an international non-governmental online activism organization, founded in 2007. Presenting itself as a “supranational democratic movement,” it says it empowers citizens around the world to mobilize on various international issues, such as human rights, corruption or poverty. (Photo by Thierry Monasse/Corbis via Getty Images)

The Ultimate Solution

Similar to the UDHR drafted in 1948, the United Nations in 2016 declared “online freedom” to be a fundamental human right that must be protected. While not legally binding, the motion passed with consensus, giving the UN limited power to endorse an open internet (OI) system. By selectively applying pressure on governments that are not compliant, the UN can now enforce digital human rights standards.

The first step would be to implement a transparent monitoring system which ensures that the full resources of the internet, and the ability to operate on it, are easily accessible to all citizens. Countries such as North Korea, China, Iran and Syria, which block websites and filter email and social media communication, would be encouraged to improve through the imposition of incentives and consequences.

All countries would be ranked on their achievement of multiple positive factors, including open standards, lack of censorship and low barriers to internet entry. A three-tier open internet ranking system would divide all nations into Free, Partly Free or Not Free. The ultimate goal would be to have all countries gradually migrate towards the Free category, giving all citizens full access to the web, equally free and open without constraints.

The second step would be for the UN to align itself much more closely with the largest western internet companies. Together they could jointly assemble detailed reports on each government’s efforts towards censorship creep and government overreach. The global tech companies are keenly aware of which specific countries are applying pressure for censorship and the restriction of digital speech. Together, the UN and global tech firms would prove strong adversaries, protecting the citizens of the world. Every individual in every country deserves to know what is truly happening in the world.

The Free countries, with an open internet and zero undue regulation or censorship, would have a clear path to tremendous economic prosperity. Countries that remain in the Not Free tier, attempting to impose their self-serving political and social values, would find themselves completely isolated, visibly violating digital human rights law.

This is not a hollow threat. A completely closed off splinternet will inevitably lead a country to isolation, low growth rates, and stagnation.


Source: The Tech Crunch

Online platforms need a super regulator and public interest tests for mergers, says UK parliament report

Posted by on Mar 11, 2019 in antitrust, Artificial Intelligence, competition law, Europe, Facebook, GDPR, General Data Protection Regulation, Mark Zuckerberg, ofcom, online platforms, Policy, Privacy, Social, UK government, United Kingdom | 0 comments

The latest policy recommendations for regulating powerful Internet platforms come from a U.K. House of Lords committee that’s calling for an overarching digital regulator to be set up to plug gaps in domestic legislation and work through any overlaps of rules.

“The digital world does not merely require more regulation but a different approach to regulation,” the committee writes in a report published on Saturday, saying the government has responded to “growing public concern” in a piecemeal fashion, whereas “a new framework for regulatory action is needed”.

It suggests a new body — which it’s dubbed the Digital Authority — be established to “instruct and coordinate regulators”.

“The Digital Authority would have the remit to continually assess regulation in the digital world and make recommendations on where additional powers are necessary to fill gaps,” the committee writes, saying that it would also “bring together non-statutory organisations with duties in this area” — so presumably bodies such as the recently created Centre for Data Ethics and Innovation (which is intended to advise the UK government on how it can harness technologies like AI for the public good).

The committee report sets out ten principles that it says the Digital Authority should use to “shape and frame” all Internet regulation — and develop a “comprehensive and holistic strategy” for regulating digital services.

These principles (listed below) read, rather unfortunately, like a list of big tech failures. Perhaps especially given Facebook founder Mark Zuckerberg’s repeated refusal to testify before another UK parliamentary committee last year. (Leading to another highly critical report.)

  • Parity: the same level of protection must be provided online as offline
  • Accountability: processes must be in place to ensure individuals and organisations are held to account for their actions and policies
  • Transparency: powerful businesses and organisations operating in the digital world must be open to scrutiny
  • Openness: the internet must remain open to innovation and competition
  • Privacy: to protect the privacy of individuals
  • Ethical design: services must act in the interests of users and society
  • Recognition of childhood: to protect the most vulnerable users of the internet
  • Respect for human rights and equality: to safeguard the freedoms of expression and information online
  • Education and awareness-raising: to enable people to navigate the digital world safely
  • Democratic accountability, proportionality and evidence-based approach

“Principles should guide the development of online services at every stage,” the committee urges, calling for greater transparency at the point data is collected; greater user choice over which data are taken; and greater transparency around data use — “including the use of algorithms”.

So, in other words, a reversal of the ‘opt-out if you want any privacy’ approach to settings that’s generally favored by tech giants — even as it’s being challenged by complaints filed under Europe’s GDPR.

The UK government is due to put out a policy White Paper on regulating online harms this winter. But the Lords Communications Committee suggests the government’s focus is too narrow, calling also for regulation that can intervene to address how “the digital world has become dominated by a small number of very large companies”.

“These companies enjoy a substantial advantage, operating with an unprecedented knowledge of users and other businesses,” it warns. “Without intervention the largest tech companies are likely to gain more control of technologies which disseminate media content, extract data from the home and individuals or make decisions affecting people’s lives.”

The committee recommends public interest tests should therefore be applied to potential acquisitions when tech giants move in to snap up startups, warning that current competition law is struggling to keep pace with the ‘winner takes all’ dynamic of digital markets and their network effects.

“The largest tech companies can buy start-up companies before they can become competitive,” it writes. “Responses based on competition law struggle to keep pace with digital markets and often take place only once irreversible damage is done. We recommend that the consumer welfare test needs to be broadened and a public interest test should be applied to data-driven mergers.”

Market concentration also means a small number of companies have “great power in society and act as gatekeepers to the internet”, the committee warns, suggesting that while greater use of data portability can help, “more interoperability” is required for the measure to be an effective remedy.

The committee also examined online platforms’ current legal liabilities around content, and recommends beefing these up too — saying self-regulation is failing and calling out social media sites’ moderation processes specifically as “unacceptably opaque and slow”.

High level political pressure in the UK recently led to a major Instagram policy change around censoring content that promotes suicide — though the shift was triggered after a public outcry related to the suicide of a young schoolgirl who had been exposed to pro-suicide content on Instagram years before.

Like other UK committees and government advisors, the Lords committee wants online services which host user-generated content to be subject to a statutory duty of care — with a special focus on children and “the vulnerable in society”.

“The duty of care should ensure that providers take account of safety in designing their services to prevent harm. This should include providing appropriate moderation processes to handle complaints about content,” it writes, recommending telecoms regulator Ofcom is given responsibility for enforcement.

“Public opinion is growing increasingly intolerant of the abuses which big tech companies have failed to eliminate,” it adds. “We hope that the industry will welcome our 10 principles and their potential to help restore trust in the services they provide. It is in the industry’s own long-term interest to work constructively with policy-makers. If they fail to do so, they run the risk of further action being taken.”


Source: The Tech Crunch

Daily Crunch: Zuckerberg lays out his privacy vision

Posted by on Mar 7, 2019 in Daily Crunch, Facebook, Mark Zuckerberg, TC | 0 comments

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Mark Zuckerberg discovers privacy

In a long post published yesterday, the Facebook CEO laid out his vision for making Facebook’s products more privacy-friendly. But can Facebook reform its 15-year legacy as devourer of all things private with a single sweeping manifesto?

Taylor Hatmaker has a simple answer: Heck no, of course it can’t. (Except she says it less politely.)

2. Huawei is suing the US government over ‘unconstitutional’ equipment ban

At the center of the suit is the company’s claim that Section 889 in the National Defense Authorization Act — which contains restrictions that prevent federal agencies from procuring Huawei equipment or services — is unconstitutional.

3. Trump called Apple’s CEO ‘Tim Apple’ by mistake

Actual quote: “You’ve really put a great investment in our country. We really appreciate it very much, Tim Apple.”

4. Google gives Android developers new tools to make money from users who won’t pay

“Rewarded Products” will allow non-paying app users to contribute to an app’s revenue stream by sacrificing their time, but not their money. The first product will be rewarded video, where users can opt to watch a video ad in exchange for in-game currency, virtual goods or other benefits.

5. Tesla’s new Supercharger slashes charging times

The V3 Supercharger, which was unveiled Wednesday, supports a peak rate of up to 250 kilowatts on the long-range version of the Model 3. At this rate, the V3 can add up to 75 miles of range in five minutes, Tesla said.

6. Bird launches platform to let entrepreneurs manage their own fleet of scooters

Bird Platform sells the vehicles to entrepreneurs at cost and then takes a 20 percent cut from the ride revenue. The program is launching in New Zealand, Canada and Latin America in the coming weeks.

7. Google brings its Duplex AI restaurant booking assistant to 43 states

Starting this week, Pixel 3 owners in 43 U.S. states will be able to use the company’s AI technology to book appointments at any restaurants that use booking services that partner with the Reserve with Google Program but don’t have an online system to complete the booking.


Source: The Tech Crunch

UK parliament calls for antitrust, data abuse probe of Facebook

Posted by on Feb 18, 2019 in Advertising Tech, app developers, Artificial Intelligence, ashkan soltani, business model, Cambridge Analytica, competition law, data protection law, DCMS committee, election law, Europe, Facebook, Federal Trade Commission, GSR, information commissioner's office, Mark Zuckerberg, Mike Schroepfer, Moscow, Policy, Privacy, russia, Security, Social, Social Media, social media platforms, United Kingdom, United States | 0 comments

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to users’ data to developers and advertisers in order to increase revenue and/or usage of its own platform; and examined what Facebook claimed as ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users in order to build voter profiles to try to influence elections.

The committee’s conclusion about Facebook’s business is a damning one, with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’ when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report. Update: Facebook said it rejects all claims it breached data protection and competition laws.

In a statement attributed to UK public policy manager, Karim Palant, the company told us:

We share the Committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.

We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. But we’re not waiting. We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for 7 years. No other channel for political advertising is as transparent and offers the tools that we do.

We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.

While we still have more to do, we are not the same company we were a year ago. We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.

Last fall Facebook was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga, although it is appealing the ICO’s penalty, claiming there’s no evidence UK users’ data got misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies; instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

“Protecting our data helps us secure the past, but protecting inferences and uses of Artificial Intelligence (AI) is what we will need to protect our future,” the committee warns.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” says the committee. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18. The government said then that it has not ruled out doing so.

We’ve reached out to the DCMS for a response to the latest committee report. Update: A department spokesperson told us:

The Government’s forthcoming White Paper on Online Harms will set out a new framework for ensuring disinformation is tackled effectively, while respecting freedom of expression and promoting innovation.

This week the Culture Secretary will travel to the United States to meet with tech giants including Google, Facebook, Twitter and Apple to discuss many of these issues.

We welcome this report’s contribution towards our work to tackle the increasing threat of disinformation and to make the UK the safest place to be online. We will respond in due course.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by an app developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy and platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations are addressed at social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action, there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit referendum vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins MP, chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response to its preliminary report last year — in which it highlights a “high proportion” of online visitors hailing from Russian cities including Moscow and Saint Petersburg…

Source: Web and publications unit, House of Commons

“This itself demonstrates the very clear interest from Russia in what we have had to say about their activities in overseas political campaigns,” the committee remarks, criticizing the government response to its preliminary report for claiming there’s no evidence of “successful” Russian interference in UK elections and democratic processes.

“It is surely a sufficient matter of concern that the Government has acknowledged that interference has occurred, irrespective of the lack of evidence of impact. The Government should be conducting analysis to understand the extent of Russian targeting of voters during elections,” it adds.

Three senior managers knew

Another interesting tidbit from the report is confirmation that the ICO has shared the names of three “senior managers” at Facebook who knew about the Cambridge Analytica data breach prior to the first press report in December 2015 — which is the date Facebook has repeatedly told the committee was when it first learnt of the breach, contradicting what the ICO found via its own investigations.

The committee’s report does not disclose the names of the three senior managers — saying the ICO has asked the names to remain confidential (we’ve reached out to the ICO to ask why it is not making this information public) — and implies the execs did not relay the information to Zuckerberg.

The committee dubs this as an example of “a profound failure” of internal governance, also branding it as evidence of “fundamental weakness” in how Facebook manages its responsibilities to users.

Here’s the committee’s account of that detail:

We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO confirmed, in correspondence with the Committee, that three “senior managers” were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.

The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

This report was updated with comment from Facebook and the UK government


Source: The Tech Crunch

Fabula AI is using social spread to spot ‘fake news’

Posted by on Feb 6, 2019 in Amazon, api, Artificial Intelligence, deep learning, Emerging-Technologies, Europe, European Research Council, Facebook, fake news, Imperial College London, London, machine learning, Mark Zuckerberg, Media, MIT, Myanmar, Social, Social Media, social media platforms, social media regulation, social network, social networks, Startups, TC, United Kingdom | 0 comments

UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”, where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It has written a paper on the approach, which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation. Which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogenous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.
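
To make that concrete, here is a minimal Python sketch of the kind of input such a model works on: one story’s spread encoded as a graph whose nodes carry per-user features. This is emphatically not Fabula’s code; the libraries, feature names and helper functions are illustrative assumptions.

```python
# Minimal sketch, not Fabula's pipeline: encode one story's spread as a graph
# with heterogeneous per-user features, ready for a graph-based classifier.
import networkx as nx
import numpy as np

def build_cascade_graph(shares, users):
    """shares: iterable of (source_user, target_user, timestamp) share events.
    users: dict of user id -> profile dict (all field names are assumptions)."""
    g = nx.DiGraph()
    for src, dst, ts in shares:
        g.add_edge(src, dst, timestamp=ts)        # edge = the story passing between users
    for uid in g.nodes:
        u = users.get(uid, {})
        g.nodes[uid]["x"] = np.array([            # node features: who the user is
            u.get("followers", 0),
            u.get("account_age_days", 0),
            u.get("is_verified", 0),
            u.get("avg_daily_posts", 0.0),
        ], dtype=float)
    return g

def to_matrices(g):
    """Adjacency and node-feature matrices, the raw inputs to a GNN layer."""
    nodes = list(g.nodes)
    idx = {n: i for i, n in enumerate(nodes)}
    a = np.zeros((len(nodes), len(nodes)))
    for src, dst in g.edges:
        a[idx[src], idx[dst]] = 1.0
    x = np.vstack([g.nodes[n]["x"] for n in nodes])
    return a, x
```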

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far been tested internally on Twitter data sub-sets. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of this year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.
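
As a rough, hypothetical illustration only (this is not Fabula’s visualisation code, and the ‘shares_fake’ label is an assumed attribute), a similar red/blue picture could be drawn like this:

```python
# Sketch: colour each user in a spread graph by sharing behaviour, mirroring
# the red/blue separation described in the caption above. Illustrative only.
import networkx as nx
import matplotlib.pyplot as plt

def plot_spread(g):
    colours = ["red" if g.nodes[n].get("shares_fake") else "blue" for n in g.nodes]
    nx.draw_spring(g, node_color=colours, node_size=20, width=0.3, with_labels=False)
    plt.show()
```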

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
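
For readers unfamiliar with the metric, ROC AUC scores how well a classifier ranks fake items above genuine ones across all possible thresholds, with 1.0 perfect and 0.5 no better than chance. A toy example with invented numbers, assuming scikit-learn, looks like this:

```python
# Toy illustration of the ROC AUC metric mentioned above; labels and scores
# are invented, not Fabula's data.
from sklearn.metrics import roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = fake, 0 = genuine (ground truth)
y_score = [0.91, 0.12, 0.78, 0.40, 0.30, 0.45, 0.88, 0.05]  # model's "fake" confidence
print(roc_auc_score(y_true, y_score))  # 0.9375: one genuine item outranks one fake
```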

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset Fabula relied on true/fake labels attached to news stories by third party fact checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of “many months”, according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks, as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps digesting (ingesting?) new stories and social media content.
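
A minimal sketch of that kind of time-based check might look like the following; the column names and the stand-in classifier are assumptions rather than Fabula’s actual evaluation code:

```python
# Train on older stories, test on newer ones, and see how the score holds up.
# Column names and the classifier are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def temporal_auc(stories: pd.DataFrame, feature_cols, cutoff):
    train = stories[stories["published_at"] < cutoff]
    test = stories[stories["published_at"] >= cutoff]
    model = GradientBoostingClassifier()              # stand-in for the real model
    model.fit(train[feature_cols], train["is_fake"])
    scores = model.predict_proba(test[feature_cols])[:, 1]
    return roc_auc_score(test["is_fake"], scores)
```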

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much or the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model, while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users, uploading what could be trillions of pieces of content daily — even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather he suggests it could be used in conjunction with other approaches such as content analysis, and thus function as another string to a wider ‘BS detector’s’ bow.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used, he says, it could do away with the need for independent third party fact-checking organizations altogether, because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); a content credibility ranking system that can down-weight dubious stories or even block them entirely; or intermediate content screening that flags potential fake news for human attention.
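In practice those three modes could all be driven by the same underlying score, with different confidence thresholds per mode; the sketch below is purely illustrative and the thresholds are invented, since Fabula hasn’t published any:

def route_content(risk_score: int, mode: str) -> str:
    """Decide what to do with an item given its truth-risk score (0-100)."""
    if mode == "auto_filter":      # fully automated, no human in the loop
        return "block" if risk_score >= 90 else "allow"
    if mode == "ranking":          # down-weight dubious stories in feeds
        return "downrank" if risk_score >= 60 else "normal"
    if mode == "screening":        # flag likely fakes for human review
        return "flag_for_review" if risk_score >= 40 else "allow"
    raise ValueError(f"unknown mode: {mode}")

print(route_content(72, "ranking"))   # -> downrank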

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact it did not have enough reviewers who were able to understand (the many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic, propagation- and user-focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human reviewers who lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesis that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and propensity to spread disinformation? As there can be a link between poverty and education… What then? Aren’t your savvy algorithmic content downweights risking exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded with $500,000 in angel investment and roughly another $500,000 in total from European Research Council grants plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competitions.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t comment on any discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team have mostly used U.S. political news for training their initial classifier. So some cultural variation in how people spread and react to nonsense online at least remains a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside a filter bubble bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests such bubbles do exist — albeit, just not encasing every Internet user.

Bronstein says the next step for the startup is to scale its prototype to be able to deal with multiple requests so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access maybe with some commercial partners to test the API but eventually we would like to make it useable by multiple people from different businesses,” he says. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”


Source: The Tech Crunch


How students are founding, funding and joining startups

Posted by on Feb 6, 2019 in Accel, Accel Scholars, Alumni Ventures Group, Amanda Bradford, Artificial Intelligence, Bill Gates, boston, coinbase, Column, CRM, CrunchBase, distributed systems, Dorm Room Fund, Drew Houston, Dropbox, editor-in-chief, Energy, entrepreneurship, Facebook, Finance, FiscalNote, Forward, General Catalyst, Graduate Fund, greylock, harvard, Jeremy Liew, Kleiner Perkins, lightspeed, Mark Zuckerberg, MIT, Pear Ventures, peter boyce, Pinterest, Private Equity, Series A, stanford, Start-Up Chile, Startup company, Startups, TC, TechStars, True Ventures, Ubiquity6, uc-berkeley, United States, upenn, Venture Capital, venture capital Firms, Warby Parker, Y Combinator | 0 comments

There has never been a better time to start, join or fund a startup as a student. 

Young founders who want to start companies while still in school have an increasing number of resources to tap into that exist just for them. Students that want to learn how to build companies can apply to an increasing number of fast-track programs that allow them to gain valuable early stage operating experience. The energy around student entrepreneurship today is incredible. I’ve been immersed in this community as an investor and adviser for some time now, and to say the least, I’m continually blown away by what the next generation of innovators are dreaming up (from Analytical Space’s global data relay service for satellites to Brooklinen’s reinvention of the luxury bed).

Bill Gates in 1973

First, let’s look at student founders and why they’re important. Student entrepreneurs have long been an important foundation of the startup ecosystem. Many students wrestle with how best to learn while in school — some students learn best through lectures, while more entrepreneurial students like author Julian Docks find it best to leave the classroom altogether and build a business instead.

Indeed, some of our most iconic founders are Microsoft’s Bill Gates and Facebook’s Mark Zuckerberg, both student entrepreneurs who launched their startups at Harvard and then dropped out to build their companies into major tech giants. A sample of the current generation of marquee companies founded on college campuses include Snap at Stanford ($29B valuation at IPO), Warby Parker at Wharton (~$2B valuation), Rent The Runway at HBS (~$1B valuation), and Brex at Stanford (~$1B valuation).

Some of today’s most celebrated tech leaders built their first ventures while in school — even if some student startups fail, the critical first-time founder experience is an invaluable education in how to build great companies. Perhaps the best example of this that I could find is Drew Houston at Dropbox (~$9B valuation at IPO), who previously founded an edtech startup at MIT that, in his words, provided a: “great introduction to the wild world of starting companies.”

Student founders are everywhere, but the highest concentration of venture-backed student founders can be found at just 5 universities. Based on venture fund portfolio data from the last six years, Harvard, Stanford, MIT, UPenn, and UC Berkeley have produced the highest number of student-founded companies that went on to raise $1 million or more in seed capital. Some prospective students will even enroll in a university specifically for its reputation of churning out great entrepreneurs. This is not to say that great companies are not being built out of other universities, nor does it mean students can’t find resources outside a select number of schools. As you can see later in this essay, there are a number of new ways students all around the country can tap into the startup ecosystem. For further reading, PitchBook produces an excellent report each year that tracks where all entrepreneurs earned their undergraduate degrees.

Student founders have a number of new media resources to turn to. New email newsletters focused on student entrepreneurship like Justine and Olivia Moore’s Accelerated and Kyle Robertson’s StartU offer new channels for young founders to reach large audiences. Justine and Olivia, the minds behind Accelerated, have a lot of street cred — they launched Stanford’s on-campus incubator Cardinal Ventures before landing as investors at CRV.

StartU goes above and beyond to be a resource to founders they profile by helping to connect them with investors (they’re active at 12 universities), and run a podcast hosted by their Editor-in-Chief Johnny Hammond that is top notch. My bet is that traditional media will point a larger spotlight at student entrepreneurship going forward.

New pools of capital are also available that are specifically for student founders. There are four categories that I call special attention to:

  • University-affiliated accelerator programs
  • University-affiliated angel networks
  • Professional venture funds investing at specific universities
  • Professional venture funds investing through student scouts

While it is difficult to estimate exactly how much capital has been deployed by each, there is no denying that there has been an explosion in the number of programs that address the pre-seed phase. A sample of the programs available at the Top 5 universities listed above are in the graphic below — listing every resource at every university would be difficult as there are so many.

One alumni-centric fund to highlight is the Alumni Ventures Group, which pools LP capital from alumni at specific universities, then launches individual venture funds that invest in founders connected to those universities (e.g. students, alumni, professors, etc.). Through this model, they’ve deployed more than $200M per year! Another highlight has been student scout programs — which vary in the degree of autonomy and capital invested — but essentially empower students to identify and fund high-potential student-founded companies for their parent venture funds. On campuses with a large concentration of student founders, it is not uncommon to find student scouts from as many as 12 different venture funds actively sourcing deals (as is made clear from David Tao’s analysis at UC Berkeley).

Investment Team at Rough Draft Ventures

In my opinion, the two institutions that have the most expansive line of sight into the student entrepreneurship landscape are First Round’s Dorm Room Fund and General Catalyst’s Rough Draft Ventures. Since 2012, these two funds have operated a nationwide network of student scouts that have invested $20K — $25K checks into companies founded by student entrepreneurs at 40+ universities. “Scout” is a loose term and doesn’t do it justice — the student investors at these two funds are almost entirely autonomous, have built their own platform services to support portfolio companies, and have launched programs to incubate companies built by female founders and founders of color. Another student-run fund worth noting that has reach beyond a single region is Contrary Capital, which raised $2.2M last year. They do a particularly great job of reaching founders at a diverse set of schools — their network of student scouts are active at 45 universities and have spoken with 3,000 founders per year since getting started. Contrary is also testing out what they describe as a “YC for university-based founders”. In their first cohort, 100% of their companies raised a pre-seed round after Contrary’s demo day. Another even more recently launched organization is The MBA Fund, which caters to founders from the business schools at Harvard, Wharton, and Stanford. While super exciting, these two newer funds only launched very recently and manage portfolios that are not large enough for analysis just yet.

Over the last few months, I’ve collected and cross-referenced publicly available data from both Dorm Room Fund and Rough Draft Ventures to assess the state of student entrepreneurship in the United States. Companies were pulled from each fund’s portfolio page, then checked against Crunchbase for amount raised, accelerator participation, and other metrics. If you’d like to sift through the data yourself, feel free to ping me — my email can be found at the end of this article. To be clear, this does not represent the full scope of investment activity at either fund — many companies in the portfolios of both funds remain confidential and unlisted for good reasons (e.g. startups working in stealth). In addition, data for early stage companies is notoriously variable in quality, even with Crunchbase. You should read these insights as directional only, given the debatable confidence interval. Still, the data is interesting and gives good indicators for the health of student entrepreneurship today.
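For what it’s worth, the cross-referencing step described above amounts to a simple join and aggregation. A minimal sketch, with hypothetical company names, amounts and column labels rather than the real dataset, might look like this:

import pandas as pd

# Hypothetical portfolio list (names and figures invented for illustration).
portfolio = pd.DataFrame({"company": ["Startup A", "Startup B", "Startup C"]})

# Hypothetical Crunchbase-style records of seed capital raised, in USD.
funding = pd.DataFrame({
    "company": ["Startup A", "Startup B"],
    "seed_raised_usd": [1_500_000, 400_000],
})

# Join the two tables; companies with no funding record are treated as $0.
merged = portfolio.merge(funding, on="company", how="left")
share = (merged["seed_raised_usd"].fillna(0) >= 1_000_000).mean()
print(f"{share:.0%} of portfolio companies raised $1M+ in seed capital")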

Dorm Room Fund and Rough Draft Ventures have invested in 230+ student-founded companies that have gone on to raise nearly $1 billion in follow on capital. These funds have invested in a diverse range of companies, from govtech (e.g. mark43, raised $77M+ and FiscalNote, raised $50M+) to space tech (e.g. Capella Space, raised ~$34M). Several portfolio companies have had successful exits, such as crypto startup Distributed Systems (acquired by Coinbase) and social networking startup tbh (acquired by Facebook). While it is too early to evaluate the success of these funds on a returns basis (both were launched just 6 years ago), we can get a sense of success by evaluating the rates by which portfolio companies raise additional capital. Taken together, 34% of DRF and RDV companies in our data set have raised $1 million or more in seed capital. For a rough comparison, CB Insights cites that 40% of YC companies and 48% of Techstars companies successfully raise follow on capital (defined as anything above $750K). Certainly within the ballpark!

Source: Crunchbase

Dorm Room Fund and Rough Draft Ventures companies in our data set have an 11–12% rate of survivorship to Series A. As a benchmark, a previous partner at Y Combinator shared that 20% of their accelerator companies raise Series A capital (YC declined to share the official figure, but it’s likely a stat that is increasing given their new Series A support programs. For further reading, check out YC’s reflection on what they’ve learned about helping their companies raise Series A funding). In any case, DRF and RDV’s numbers should be taken with a grain of salt, as the average age of their portfolio companies is very low and raising Series A rounds generally takes time. Ultimately, it is clear that DRF and RDV are active in the earlier (and riskier) phases of the startup journey.

Dorm Room Fund and Rough Draft Ventures send 18–25% of their portfolio companies to Y Combinator or Techstars. Given YC’s 1.5% acceptance rate as reported in Fortune, this is quite significant! Internally, these two funds offer founders an opportunity to participate in mock interviews with YC and Techstars alumni, as well as tap into their communities for peer support (e.g. advice on pitch decks and application content). As a result, Dorm Room Fund and Rough Draft Ventures regularly send cohorts of founders to these prestigious accelerator programs. Based on our data set, 17–20% of DRF and RDV companies that attend one of these accelerators end up raising Series A venture financing.

Source: Crunchbase

Dorm Room Fund and Rough Draft Ventures don’t invest in the same companies. When we take a deeper look at one specific ecosystem where these two funds have been equally active over the last several years — Boston — we actually see that the degree of investment overlap for companies that have raised $1M+ seed rounds sits at 26%. This suggests that these funds are either a) seeing different dealflow or b) have widely different investment decision-making.

Source: Crunchbase

Dorm Room Fund and Rough Draft Ventures should not just be measured on a returns basis today, as it’s too early. I hypothesize that DRF and RDV are actually encouraging more entrepreneurial activity in the ecosystem (more students decide to start companies while in school) as well as improving long-term founder outcomes amongst students they touch (portfolio founders build bigger and more successful companies later in their careers). As more students start companies, there’s likely a positive feedback loop where there’s increasing peer pressure to start a company or lean on friends for founder support (e.g. feedback, advice, etc.). Both of these subjects warrant additional study, but it’s likely too early to conduct these analyses today.

Dorm Room Fund and Rough Draft Ventures have impressive alumni that you will want to track. 1 in 4 alumni partners are founders, and 29% of these founder alumni have raised $1M+ seed rounds for their companies. These include Anjney Midha’s augmented reality startup Ubiquity6 (raised $37M+), Shubham Goel’s investor-focused CRM startup Affinity (raised $13M+), Bruno Faviero’s AI security software startup Synapse (raised $6M+), Amanda Bradford’s dating app The League (raised $2M+), and Dillon Chen’s blockchain startup Commonwealth Labs (raised $1.7M). It makes sense to me that alumni from these communities that decide to start companies have an advantage over their peers — they know what good companies look like and they can tap into powerful networks of young talent / experienced investors.

Beyond Dorm Room Fund and Rough Draft Ventures, some venture capital firms focus on incubation for student-founded startups. Credit should first be given to Lightspeed for producing the amazing Summer Fellows bootcamp experience for promising student founders — after all, Pinterest was built there! Jeremy Liew gives a good overview of the program through his sit-down interview with Afterbox’s Zack Banack. Based on a study they conducted last year, 40% of Lightspeed Summer Fellows alumni are currently active founders. Pear Ventures also has an impressive summer incubator program where 85% of its companies successfully complete a fundraise. Index Ventures is the latest to build an incubator program for student founders, and even accepts founders who want to work on an idea part-time while completing a summer internship.

Let’s now look at students who want to join a startup before founding one. Venture funds have historically looked to tap students for talent, and are expanding the engagement lifecycle. The longest running programs include Kleiner Perkins’ KP Fellows and True Ventures’ TEC Fellows, which focus on placing the next generation’s most promising product managers, engineers, and designers into the portfolio companies of their parent venture funds.

There’s also the secretive Greylock X, a referral-based hand-picked group of the best student engineers in Silicon Valley (among their impressive alumni are founders like Yasyf Mohamedali and Joe Kahn, the folks behind First Round-backed Karuna Health). As these programs have matured, these firms have recognized the long-run value of engaging the alumni of their programs.

More and more alumni are “coming back” to the parent funds as entrepreneurs, like KP Fellow Dylan Field of Figma (whose company is now also hosting a KP Fellow, closing a full-circle loop!). Based on their latest data, 10% of KP Fellows alumni are founders — that’s a lot given the fact that their community has grown to 500! This helps explain why Kleiner Perkins has created a structured path to $100K in seed funding for companies founded by KP Fellow alumni. It looks like venture funds are beginning to invest in student programs as part of their larger platform strategy, which can have a real impact over the long term (for further reading, see this analysis of platform strategy outcomes by USV’s Bethany Crystal).

KP Fellows in San Francisco

Venture funds are doubling down on student talent engagement — in just the last 18 months, 4 funds have launched student programs. It’s encouraging to see new funds follow in the footsteps of First Round, General Catalyst, Kleiner Perkins, Greylock, and Lightspeed. In 2017, Accel launched their Accel Scholars program to engage top talent at UC Berkeley and Stanford. In 2018, we saw 8VC Fellows, NEA Next, and Floodgate Insiders all launch, targeting elite universities outside of Silicon Valley. Y Combinator implemented Early Decision, which allows student founders to apply one batch early to help with academic scheduling. Most recently, at the start of 2019, First Round launched the Graduate Fund (staffed by Dorm Room Fund alumni) to invest in founders who are recent graduates or young alumni.

Given more time, I’d love to study the rates by which student founders start another company following investments from student scout funds, as well as whether or not they’re more successful in those ventures. In any case, this is an escalation in the number of venture funds that have started to get serious about engaging students — both for talent and dealflow.

Student entrepreneurship 2.0 is here. There are more structured paths to success for students interested in starting or joining a startup. Founders have more opportunities to garner press, seek advice, raise capital, and more. Venture funds are increasingly leveraging students to help improve the three F’s — finding, funding, and fixing. In my personal view, I believe it is becoming more and more important for venture funds to gain mindshare amongst the next generation of founders and operators early, while still in school.

I can’t wait to see what’s next for student entrepreneurship in 2019. If you’re interested in digging in deeper (I’m human — I’m sure I haven’t covered everything related to student entrepreneurship here) or learning more about how you can start or join a startup while still in school, shoot me a note at sxu@dormroomfund.com. A massive thanks to Phin Barnes, Rei Wang, Chauncey Hamilton, Peter Boyce, Natalie Bartlett, Denali Tietjen, Eric Tarczynski, Will Robbins, Jasmine Kriston, Alicia Lau, Johnny Hammond, Bruno Faviero, Athena Kan, Shohini Gupta, Alex Immerman, Albert Dong, Phillip Hua-Bon-Hoa, and Trevor Sookraj for your incredible encouragement, support, and insight during the writing of this essay.


Source: The Tech Crunch


Facebook warned over privacy risks of merging messaging platforms

Posted by on Feb 2, 2019 in antitrust, Apps, Brian Acton, business intelligence, data protection, e2e encryption, Europe, European Commission, Facebook, GDPR, General Data Protection Regulation, instagram, Ireland, Mark Zuckerberg, messaging apps, Privacy, Social, Social Media, WhatsApp | 0 comments

Facebook’s lead data protection regulator in Europe has asked the company for an “urgent briefing” regarding plans to integrate the underlying infrastructure of its three social messaging platforms.

In a statement posted to its website late last week the Irish Data Protection Commission writes: “Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal.”

Last week the New York Times broke the news that Facebook intends to unify the backend infrastructure of its three separate products, couching it as Facebook founder Mark Zuckerberg asserting control over acquisitions whose founders have since left the building.

Instagram’s founders, Kevin Systrom and Mike Krieger, left Facebook last year as a result of rising tensions over reduced independence, according to our sources.

WhatsApp’s founders left Facebook earlier, with Brian Acton departing in late 2017 and Jan Koum sticking it out until spring 2018. The pair reportedly clashed with Facebook execs over user privacy and differences over how to monetize the end-to-end encrypted platform.

Acton later said Facebook had coached him to tell European regulators assessing whether to approve the 2014 merger that it would be “really difficult” for the company to combine WhatsApp and Facebook user data.

In the event, Facebook went on to link accounts across the two platforms just two years after the acquisition closed. It was later hit with a $122M penalty from the European Commission for providing “incorrect or misleading” information at the time of the merger. Though Facebook claimed it had made unintentional “errors” in the 2014 filing.

A further couple of years on and Facebook has now graduated to seeking full platform unification of separate messaging products.

“We want to build the best messaging experiences we can; and people want messaging to be fast, simple, reliable and private,” a spokesperson told us when we asked for a response to the NYT report. “We’re working on making more of our messaging products end-to-end encrypted and considering ways to make it easier to reach friends and family across networks.”

“As you would expect, there is a lot of discussion and debate as we begin the long process of figuring out all the details of how this will work,” the spokesperson added, confirming the substance of the NYT report.

There certainly would be a lot of detail to be worked out. Not least the feasibility of legally merging user data across distinct products in Europe, where a controversial 2016 privacy u-turn by WhatsApp — when it suddenly announced it would after all share user data with parent company Facebook (despite previously saying it would never do so), including sharing data for marketing purposes — triggered swift regulatory intervention.

Facebook was forced to suspend marketing-related data flows in Europe. Though it has continued sharing data between WhatsApp and Facebook for security and business intelligence purposes, leading the French data watchdog to issue a formal notice at the end of 2017 warning that the latter transfers also lack a legal basis.

A court in Hamburg, Germany, also officially banned Facebook from using WhatsApp user data for its own purposes.

Early last year, following an investigation into the data-sharing u-turn, the UK’s data watchdog obtained an undertaking from WhatsApp that it would not share personal data with Facebook until the two services could do so in a way that’s compliant with the region’s strict privacy framework, the General Data Protection Regulation (GDPR).

Facebook only avoided a fine from the UK regulator because it froze data flows after the regulatory intervention. But the company clearly remains on watch — and any fresh moves to further integrate the platforms would trigger instant scrutiny, evidenced by the shot across the bows from the DPC in Ireland (Facebook’s international HQ is based in the country).

The 2016 WhatsApp-Facebook privacy u-turn also occurred prior to Europe’s GDPR coming into force. And the updated privacy framework includes a regime of substantially larger maximum fines for any violations.

Under the regulation watchdogs also have the power to ban companies from processing data. Which, in the case of a revenue-rich data-mining giant like Facebook, could be a far more potent disincentive than even a billion dollar fine.

We’ve reached out to Facebook for comment on the Irish DPC’s statement and will update this report with any response.

Here’s the full statement from the Irish watchdog:

While we understand that Facebook’s proposal to integrate the Facebook, WhatsApp and Instagram platforms is at a very early conceptual stage of development, the Irish DPC has asked Facebook Ireland for an urgent briefing on what is being proposed. The Irish DPC will be very closely scrutinising Facebook’s plans as they develop, particularly insofar as they involve the sharing and merging of personal data between different Facebook companies. Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal. It must be emphasised that ultimately the proposed integration can only occur in the EU if it is capable of meeting all of the requirements of the GDPR.

Facebook may be hoping that extending end-to-end encryption to Instagram as part of its planned integration effort, per the NYT report, could offer a technical route to stop any privacy regulators’ hammers from falling.

Though use of e2e encryption still does not shield metadata from being harvested. And metadata offers a rich source of inferences about individuals which, under EU law, would certainly constitute personal data. So even with robust encryption across the board of Instagram, Facebook and WhatsApp the unified messaging platforms could still collectively leak plenty of personal data to their data-mining parent.

Facebook’s apps are also not open source. So even WhatsApp, which uses the respected Signal Protocol for its e2e encryption, remains under its control — with no ability for external audits to verify exactly what happens to data inside the app (such as checking what data gets sent back to Facebook). Users still have to trust Facebook’s implementation but regulators might demand actual proof of bona fide messaging privacy.

Nonetheless, the push by Facebook to integrate separate messaging products onto a single unified platform could be a defensive strategy — intended to throw dust in the face of antitrust regulators as political scrutiny of its market position and power continues to crank up. Though it would certainly be an aggressive defence to more tightly knit separate platforms together.

But if the risk Facebook is trying to shrink is being forced, by competition regulators, to sell off one or two of its messaging platforms it may feel it has nothing to lose by making it technically harder to break its business apart.

At the time of the acquisitions of Instagram and WhatsApp Facebook promised autonomy to their founders. Zuckerberg has since changed his view, according to the NYT — believing integrating all three will increase the utility of each and thus provide a disincentive for users to abandon each service.

It may also be a hedge against any one of the three messaging platforms decreasing in popularity by furnishing the business with internal levers it can throw to try to artificially juice activity across a less popular app by encouraging cross-platform usage.

And given the staggering size of the Facebook messaging empire, which globally sprawls to 2.5BN+ humans, user resistance to centralized manipulation via having their buttons pushed to increase cross-platform engagement across Facebook’s business may be futile without regulatory intervention.


Source: The Tech Crunch


The facts about Facebook

Posted by on Jan 26, 2019 in Adtech, Advertising Tech, Artificial Intelligence, Europe, Facebook, Mark Zuckerberg, Privacy, Security, Social, Social Media, surveillance, TC | 0 comments

This is a critical reading of Facebook founder Mark Zuckerberg’s article in the WSJ on Thursday, also entitled The Facts About Facebook.

Yes Mark, you’re right; Facebook turns 15 next month. What a long time you’ve been in the social media business! We’re curious as to whether you’ve also been keeping count of how many times you’ve been forced to apologize for breaching people’s trust or, well, otherwise royally messing up over the years.

It’s also true you weren’t setting out to build “a global company”. The predecessor to Facebook was a ‘hot or not’ game called ‘FaceMash’ that you hacked together while drinking beer in your Harvard dormroom. Your late night brainwave was to get fellow students to rate each others’ attractiveness — and you weren’t at all put off by not being in possession of the necessary photo data to do this. You just took it; hacking into the college’s online facebooks and grabbing people’s selfies without permission.

Blogging about what you were doing as you did it, you wrote: “I almost want to put some of these faces next to pictures of some farm animals and have people vote on which is more attractive.” Just in case there was any doubt as to the ugly nature of your intention. 

The seeds of Facebook’s global business were thus sown in a crude and consentless game of clickbait whose idea titillated you so much you thought nothing of breaching security, privacy, copyright and decency norms just to grab a few eyeballs.

So while you may not have instantly understood how potent this ‘outrageous and divisive’ eyeball-grabbing content tactic would turn out to be — oh hai future global scale! — the core DNA of Facebook’s business sits in that frat boy discovery where your eureka Internet moment was finding you could win the attention jackpot by pitting people against each other.

Pretty quickly you also realized you could exploit and commercialize human one-upmanship — gotta catch em all friend lists! popularity poke wars! — and stick a badge on the resulting activity, dubbing it ‘social’.

FaceMash was antisocial, though. And the unpleasant flipside that can clearly flow from ‘social’ platforms is something you continue to be neither honest nor open enough about. Whether it’s political disinformation, hate speech or bullying, the individual and societal impacts of maliciously minded content shared and amplified using massively mainstream tools you control are now impossible to ignore.

Yet you prefer to play down these human impacts; as a “crazy idea”, or by implying that ‘a little’ amplified human nastiness is the necessary cost of being in the big multinational business of connecting everyone and ‘socializing’ everything.

But did you ask the father of 14-year-old Molly Russell, a British schoolgirl who took her own life in 2017, whether he’s okay with your growth vs controls trade-off? “I have no doubt that Instagram helped kill my daughter,” said Russell in an interview with the BBC this week.

After her death, Molly’s parents found she had been following accounts on Instagram that were sharing graphic material related to self-harming and suicide, including some accounts that actively encourage people to cut themselves. “We didn’t know that anything like that could possibly exist on a platform like Instagram,” said Russell.

Without a human editor in the mix, your algorithmic recommendations are blind to risk and suffering. Built for global scale, they get on with the expansionist goal of maximizing clicks and views by serving more of the same sticky stuff. And more extreme versions of things users show an interest in to keep the eyeballs engaged.

So when you write about making services that “billions” of “people around the world love and use” forgive us for thinking that sounds horribly glib. The scales of suffering don’t sum like that. If your entertainment product has whipped up genocide anywhere in the world — as the UN said Facebook did in Myanmar — it’s failing regardless of the proportion of users who are having their time pleasantly wasted on and by Facebook.

And if your algorithms can’t incorporate basic checks and safeguards so they don’t accidentally encourage vulnerable teens to commit suicide you really don’t deserve to be in any consumer-facing business at all.

Yet your article shows no sign you’ve been reflecting on the kinds of human tragedies that don’t just play out on your platform but can be an emergent property of your targeting algorithms.

You focus instead on what you call “clear benefits to this business model”.

The benefits to Facebook’s business are certainly clear. You have the billions in quarterly revenue to stand that up. But what about the costs to the rest of us? Human costs are harder to quantify but you don’t even sound like you’re trying.

You do write that you’ve heard “many questions” about Facebook’s business model. Which is most certainly true but once again you’re playing down the level of political and societal concern about how your platform operates (and how you operate your platform) — deflecting and reframing what Facebook is to cast your ad business as a form of quasi-philanthropy; a comfortable discussion topic and self-serving idea you’d much prefer we were all sold on.

It’s also hard to shake the feeling that your phrasing at this point is intended as a bit of an in-joke for Facebook staffers — to smirk at the ‘dumb politicians’ who don’t even know how Facebook makes money.

Y’know, like you smirked…

Then you write that you want to explain how Facebook operates. But, thing is, you don’t explain — you distract, deflect, equivocate and mislead, which has been your business’ strategy through many months of scandal (that and worse tactics — such as paying a PR firm that used oppo research tactics to discredit Facebook critics with smears).

Dodging is another special power; such as how you dodged repeat requests from international parliamentarians to be held accountable for major data misuse and security breaches.

The Zuckerberg ‘open letter’ mansplain, which typically runs to thousands of blame-shifting words, is another standard issue production from the Facebook reputation crisis management toolbox.

And here you are again, ironically enough, mansplaining in a newspaper; an industry that your platform has worked keenly to gut and usurp, hungry to supplant editorially guided journalism with the moral vacuum of algorithmically geared space-filler which, left unchecked, has been shown, time and again, to lift divisive and damaging content into public view.

The latest Zuckerberg screed has nothing new to say. It’s pure spin. We’ve read scores of self-serving Facebook apologias over the years and can confirm Facebook’s founder has made a very tedious art of selling abject failure as some kind of heroic lack of perfection.

But the spin has been going on for far, far too long. Fifteen years, as you remind us. Yet given that hefty record it’s little wonder you’re moved to pen again — imagining that another word blast is all it’ll take for the silly politicians to fall in line.

Thing is, no one is asking Facebook for perfection, Mark. We’re looking for signs that you and your company have a moral compass. Because the opposite appears to be true. (Or as one UK parliamentarian put it to your CTO last year: “I remain to be convinced that your company has integrity”.)

Facebook has scaled to such an unprecedented, global size exactly because it has no editorial values. And you say again now you want to be all things to all men. Put another way that means there’s a moral vacuum sucking away at your platform’s core; a supermassive ethical black hole that scales ad dollars by the billions because you won’t tie the kind of process knots necessary to treat humans like people, not pairs of eyeballs.

You don’t design against negative consequences or to pro-actively avoid terrible impacts — you let stuff happen and then send in the ‘trust & safety’ team once the damage has been done.

You might call designing against negative consequences a ‘growth bottleneck’; others would say it’s having a conscience.

Everything standing in the way of scaling Facebook’s usage is, under the Zuckerberg regime, collateral damage — hence the old mantra of ‘move fast and break things’ — whether it’s social cohesion, civic values or vulnerable individuals.

This is why it takes a celebrity defamation lawsuit to force your company to dribble a little more resource into doing something about scores of professional scammers paying you to pop their fraudulent schemes in a Facebook “ads” wrapper. (Albeit, you’re only taking some action in the UK in this particular case.)

Funnily enough — though it’s not at all funny and it doesn’t surprise us — Facebook is far slower and patchier when it comes to fixing things it broke.

Of course there will always be people who thrive with a digital megaphone like Facebook thrust in their hand. Scammers being a pertinent example. But the measure of a civilized society is how it protects those who can’t defend themselves from targeted attacks or scams because they lack the protective wrap of privilege. Which means people who aren’t famous. Not public figures like Martin Lewis, the consumer champion who has his own platform and enough financial resources to file a lawsuit to try to make Facebook do something about how its platform supercharges scammers.

Zuckerberg’s slippery call to ‘fight bad content with more content’ — or to fight Facebook-fuelled societal division by shifting even more of the apparatus of civic society onto Facebook — fails entirely to recognize this asymmetry.

And even in the Lewis case, Facebook remains a winner; Lewis dropped his suit and Facebook got to make a big show of signing over £500k worth of ad credit coupons to a consumer charity that will end up giving them right back to Facebook.

The company’s response to problems its platform creates is to look the other way until a trigger point of enough bad publicity gets reached. At which critical point it flips the usual crisis PR switch and sends in a few token clean up teams — who scrub a tiny proportion of terrible content; or take down a tiny number of fake accounts; or indeed make a few token and heavily publicized gestures — before leaning heavily on civil society (and on users) to take the real strain.

You might think Facebook reaching out to respected external institutions is a positive step. A sign of a maturing mindset and a shift towards taking greater responsibility for platform impacts. (And in the case of scam ads in the UK it’s donating £3M in cash and ad credits to a bona fide consumer advice charity.)

But this is still Facebook dumping problems of its making on an already under-resourced and over-worked civic sector at the same time as its platform supersizes their workload.

In recent years the company has also made a big show of getting involved with third party fact checking organizations across various markets — using these independents to stencil in a PR strategy for ‘fighting fake news’ that also entails Facebook offloading the lion’s share of the work. (It’s not paying fact checkers anything; given the clear conflict that would represent, it obviously can’t.)

So again external organizations are being looped into Facebook’s mess — in this case to try to drain the swamp of fakes being fenced and amplified on its platform — even as the scale of the task remains hopeless, and all sorts of junk continues to flood into and pollute the public sphere.

What’s clear is that none of these organizations has the scale or the resources to fix problems Facebook’s platform creates. Yet it serves Facebook’s purposes to be able to point to them trying.

And all the while Zuckerberg is hard at work fighting to fend off regulation that could force his company to take far more care and spend far more of its own resources (and profits) monitoring the content it monetizes by putting it in front of eyeballs.

The Facebook founder is fighting because he knows his platform is a targeted attack: on individual attention, via privacy-hostile behaviorally targeted ads (his euphemism for this is “relevant ads”); on social cohesion, via divisive algorithms that drive outrage in order to maximize platform engagement; and on democratic institutions and norms, by systematically eroding consensus and the potential for compromise between the different groups that every society is comprised of.

In his WSJ post Zuckerberg can only claim Facebook doesn’t “leave harmful or divisive content up”. He has no defence against Facebook having put it up and enabled it to spread in the first place.

Sociopaths relish having a soapbox so unsurprisingly these people find a wonderful home on Facebook. But where does empathy fit into the antisocial media equation?

As for Facebook being a ‘free’ service — a point Zuckerberg is most keen to impress in his WSJ post — it’s of course a cliché to point out that ‘if it’s free you’re the product’. (Or as the even older saying goes: ‘There’s no such thing as a free lunch’).

But for the avoidance of doubt, “free” access does not mean cost-free access. And in Facebook’s case the cost is both individual (to your attention and your privacy); and collective (to the public’s attention and to social cohesion).

The much bigger question is who actually benefits if “everyone” is on Facebook, as Zuckerberg would prefer. Facebook isn’t the Internet. Facebook doesn’t offer the sole means of communication, digital or otherwise. People can, and do, ‘connect’ (if you want to use such a transactional word for human relations) just fine without Facebook.

So beware the hard and self-serving sell in which the founder of the now 15-year-old Facebook seeks yet again to recast privacy as an unaffordable luxury.

Actually, Mark, it’s a fundamental human right.

The best argument Zuckerberg can muster for his goal of universal Facebook usage being good for anything other than his own business’ bottom line is to suggest small businesses could use that kind of absolute reach to drive extra growth of their own.

Though he only provides a few general data-points to support the claim; saying there are “more than 90M small businesses on Facebook” which “make up a large part of our business” (how large?) — and claiming “most” (51%?) couldn’t afford TV ads or billboards (might they be able to afford other online or newspaper ads though?); he also cites a “global survey” (how many businesses surveyed?), presumably run by Facebook itself, which he says found “half the businesses on Facebook say they’ve hired more people since they joined” (but how did you ask the question, Mark?; we’re concerned it might have been rather leading), and from there he leaps to the implied conclusion that “millions” of jobs have essentially been created by Facebook.

But did you control for common causes Mark? Or are you just trying to take credit for others’ hard work because, well, it’s politically advantageous for you to do so?

Whether Facebook’s claims about being great for small business stand up to scrutiny or not, if people’s fundamental rights are being wholesale flipped for SMEs to make a few extra bucks that’s an unacceptable trade off.

“Millions” of jobs suggestively linked to Facebook sure sounds great — but you can’t and shouldn’t overlook disproportionate individual and societal costs, as Zuckerberg is urging policymakers to here.

Let’s also not forget that some of the small business ‘jobs’ that Facebook’s platform can take definitive and major credit for creating include the Macedonia teens who became hyper-adept at seeding Facebook with fake U.S. political news, around the 2016 presidential election. But presumably those aren’t the kind of jobs Zuckerberg is advocating for.

He also repeats the spurious claim that Facebook gives users “complete control” over what it does with personal information collected for advertising.

We’ve heard this time and time again from Zuckerberg and yet it remains pure BS.

WASHINGTON, DC – APRIL 10: Facebook co-founder, Chairman and CEO Mark Zuckerberg concludes his testimony before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill April 10, 2018 in Washington, DC. Zuckerberg, 33, was called to testify after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Win McNamee/Getty Images)

Yo Mark! First up we’re still waiting for your much trumpeted ‘Clear History’ tool. You know, the one you claimed you thought of under questioning in Congress last year (and later used to fend off follow up questions in the European Parliament).

Reportedly the tool is due this Spring. But even when it does finally drop it represents another classic piece of gaslighting by Facebook, given how it seeks to normalize (and so enable) the platform’s pervasive abuse of its users’ data.

Truth is, there is no master ‘off’ switch for Facebook’s ongoing surveillance. Such a switch — were it to exist — would represent a genuine control for users. But Zuckerberg isn’t offering it.

Instead his company continues to groom users into accepting being creeped on by offering pantomime settings that boil down to little more than privacy theatre — if they even realize they’re there.

‘Hit the button! Reset cookies! Delete browsing history! Keep playing Facebook!’

An interstitial reset is clearly also a dilute decoy. It’s not the same as being able to erase all extracted insights Facebook’s infrastructure continuously mines from users, using these derivatives to target people with behavioral ads; tracking and profiling on an ongoing basis by creeping on browsing activity (on and off Facebook), and also by buying third party data on its users from brokers.

Multiple signals and inferences are used to flesh out individual ad profiles on an ongoing basis, meaning the files are never static. And there’s simply no way to tell Facebook to burn your digital ad mannequin. Not even if you delete your Facebook account.

Nor, indeed, is there a way to get a complete read-out from Facebook of all the data it has attached to your identity. Not even in Europe, where companies are subject to strict privacy laws that place a legal requirement on data controllers to disclose, on request, all the personal data they hold on a person, who they’re sharing it with, for what purposes and on what legal grounds.

Last year Paul-Olivier Dehaye, the founder of PersonalData.IO, a startup that aims to help people control how their personal data is accessed by companies, recounted in the UK parliament how he’d spent years trying to obtain all his personal information from Facebook — with the company resorting to legal arguments to block his subject access request.

Dehaye said he had succeeded in extracting a bit more of his data from Facebook than it initially handed over. But it was still just a “snapshot”, not an exhaustive list, of all the advertisers Facebook had shared his data with. This glimpsed tip implies a staggeringly massive personal data iceberg lurking beneath the surface for each and every one of the 2.2BN+ Facebook users. (Though the true scale is likely even bigger, because Facebook tracks non-users too.)

Zuckerberg’s “complete control” wording is therefore at best self-serving and at worst an outright lie. In truth it’s Facebook’s business that has complete control of users, offering only a superficial layer of confusing, fiddly, ever-shifting controls that demand a continued presence on the platform to use, plus ongoing effort to keep on top of settings changes (which are always, to a fault, privacy-hostile), making managing your personal data a life-long chore.

Facebook’s power dynamic puts the onus squarely on the user to keep finding and hitting the reset button.

But this too is a distraction. Resetting anything on its platform is largely futile, given Facebook retains whatever behavioral insights it already stripped off of your data (and fed to its profiling machinery). And its omnipresent background snooping carries on unchecked, amassing fresh insights you also can’t clear.

Nor does Clear History offer any control for the non-users Facebook tracks via the pixels and social plug-ins it’s larded around the mainstream web. Zuckerberg was asked about so-called shadow profiles in Congress last year — which led to this awkward exchange where he claimed not to know what the phrase refers to.
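
For readers wondering how that off-site, non-user tracking works mechanically, here is a minimal, purely illustrative sketch in Python (this is emphatically not Facebook’s code): a third-party “pixel” endpoint that a publisher embeds as a tiny image, letting the tracker log every page view, the referring page and a long-lived cookie, whether or not the visitor has an account with the tracker.

# Hypothetical illustration of how a generic tracking pixel works.
# NOT Facebook's code; it just shows the mechanism: a publisher embeds
# <img src="https://tracker.example/px?site=news-site"> and every page
# view sends the tracker a request it can log and tie to a cookie.
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

# 1x1 transparent GIF, the classic "tracking pixel" payload
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The tracker sees which page embedded the pixel (Referer header)
        # and can recognise the same browser again via a long-lived cookie,
        # even if that browser belongs to someone with no account.
        cookie = self.headers.get("Cookie") or f"uid={uuid.uuid4()}"
        print("hit:", self.path,
              "| referer:", self.headers.get("Referer"),
              "| cookie:", cookie)
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Set-Cookie", cookie + "; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PixelHandler).serve_forever()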

EU MEPs also seized on the issue, pushing him to respond. He did so by attempting to conflate surveillance and security — by claiming it’s necessary for Facebook to hold this data to keep “bad content out”. Which seems a bit of an ill-advised argument to make given how badly that mission is generally going for Facebook.

Still, Zuckerberg repeats the claim in the WSJ post, saying information collected for ads is “generally important for security and operating our services” — using this to address what he couches as “the important question of whether the advertising model encourages companies like ours to use and store more information than we otherwise would”.

So, essentially, Facebook’s founder is saying that the price for Facebook’s existence is pervasive surveillance of everyone, everywhere, with or without your permission.

Though he doesn’t express that ‘fact’ as a cost of his “free” platform. RIP privacy indeed.

Another pertinent example of Zuckerberg simply not telling the truth when he claims Facebook users can control their information vis-a-vis his ad business (an example which also happens to underline how pernicious his attempts to use “security” to justify eroding privacy really are) bubbled into view last fall, when Facebook finally confessed that mobile phone numbers users had provided for the specific purpose of enabling two-factor authentication (2FA), to increase the security of their accounts, were also being used by Facebook for ad targeting.

A company spokesperson told us that if a user wanted to opt out of the ad-based repurposing of their mobile phone data they could use non-phone-number-based 2FA instead, though Facebook only added the ability to use an app for 2FA in May last year.
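
For anyone unsure what app-based 2FA means in practice, here is a minimal sketch using the third-party pyotp library: the service and an authenticator app share a secret once (usually via a QR code), and thereafter one-time codes are generated locally on the user’s device, so there is no phone number to hand over (or to quietly repurpose for ads). The service name and account address below are invented for illustration.

# Minimal illustration of app-based (TOTP) two-factor auth using the
# third-party pyotp library -- the alternative to SMS codes that doesn't
# require giving a service your phone number. Illustrative only.
import pyotp

# Enrolment: the service generates a shared secret and shows it to the
# user once, typically as a QR code the authenticator app scans.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                 issuer_name="ExampleService"))

# Login: the app derives a 6-digit code from the secret and current time;
# the service verifies it without any phone network involved.
code = totp.now()
print("Current code:", code)
print("Verified:", totp.verify(code))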

What Facebook is doing on the security front is especially disingenuous BS in that it risks undermining security practice by bundling a respected tool (2FA) with ads that creep on people.

And there’s plenty more of this kind of disingenuous nonsense in Zuckerberg’s WSJ post, where he repeats a claim we first heard him utter last May, at a conference in Paris: that because European users had (mostly) swallowed the new terms in Facebook’s reworked consent flow, rolled out ahead of updated privacy rules coming into force in Europe, rather than deleting their accounts en masse, the majority of people must approve of “more relevant” (i.e. more creepy) Facebook ads.

Au contraire, it shows nothing of the sort. It simply underlines the fact Facebook still does not offer users a free and fair choice when it comes to consenting to their personal data being processed for behaviorally targeted ads — despite free choice being a requirement under Europe’s General Data Protection Regulation (GDPR).

If Facebook users are forced to ‘choose’ between being creeped on and deleting their account on the dominant social service where all their friends are, it’s hardly a free choice. (And GDPR complaints have been filed over this exact issue of ‘forced consent’.)

Add to that, as we said at the time, Facebook’s GDPR tweaks were lousy with manipulative, dark pattern design. So again the company is leaning on users to get the outcomes it wants.

It’s not a fair fight, any which way you look at it. But here we have Zuckerberg, the BS salesman, trying to claim his platform’s ongoing manipulation of people already enmeshed in the network is evidence that people want creepy ads.


The truth is that most Facebook users remain unaware of how extensively the company creeps on them (per this recent Pew research). And fiddly controls are of course even harder to get a handle on if you’re sitting in the dark.

Zuckerberg appears to concede a little ground on the transparency and control point when he writes that: “Ultimately, I believe the most important principles around data are transparency, choice and control.” But all the privacy-hostile choices he’s made, the faux controls he’s offered, and the data mountain he simply won’t ‘fess up to sitting on show, beyond reasonable doubt, that the company cannot and will not self-regulate.

If Facebook is allowed to continue setting its own parameters and choosing its own definitions (of “transparency, choice and control”), users won’t have even one of the three principles, let alone the full house they should have. Facebook will just keep moving the goalposts and marking its own homework.

You can see this in the way Zuckerberg fuzzes and elides what his company really does with people’s data; and how he muddies and muddles uses for the data — such as by saying he doesn’t know what shadow profiles are; or claiming users can download ‘all their data’; or that ad profiles are somehow essential for security; or by repurposing 2FA digits to personalize ads too.

How do you try to prevent the purpose limitation principle being applied to regulate your surveillance-reliant big data ad business? Why, by mixing the data streams, of course! And then trying to sow confusion among regulators and policymakers by forcing them to unpick your mess.

Much like Facebook is forcing civic society to clean up its messy antisocial impacts.

Europe’s GDPR is focusing the conversation, though, and targeted complaints filed under the bloc’s new privacy regime have shown the regulation has teeth that can bite back against rights incursions.

But before we put another self-serving Zuckerberg screed to rest, let’s take a final look at his description of how Facebook’s ad business works. Because this is also seriously misleading. And it cuts to the very heart of the “transparency, choice and control” issue he’s quite right to say is central to the personal data debate. (He just wants to be the one who gets to define what each of those words means.)

In the article, Zuckerberg claims “people consistently tell us that if they’re going to see ads, they want them to be relevant”. But who are these “people” of which he speaks? If he’s referring to the aforementioned European Facebook users, who accepted updated terms with the same horribly creepy ads because he didn’t offer them any alternative, we would suggest that’s not a very affirmative signal.

Now, if it were true that a generic group of ‘Internet people’ were consistently saying anything about online ads, the loudest message would most likely be that they don’t like them. Click-through rates are fantastically small. Hence, too, the large numbers of people using ad-blocking tools. (Growth in ad blocker usage has also occurred in parallel with the increasing incursions of the adtech industrial surveillance complex.)

So Zuckerberg’s logical leap to claim users of free services want to be shown only the most creepy ads is really a very odd one.

Let’s now turn to Zuckerberg’s use of the word “relevant”. As we noted above, this is a euphemism. It conflates many concepts, but principally it’s used by Facebook as a cloak to shield and obscure the reality of what it’s actually doing (i.e. privacy-hostile people-profiling to power intrusive, behaviorally microtargeted ads), in order to avoid scrutiny of exactly those creepy and intrusive practices.

Yet the real sleight of hand is how Zuckerberg glosses over the fact that ads can be relevant without being creepy. Because ads can be contextual. They don’t have to be behaviorally targeted.

Ads can be based on — for example — a real-time search/action plus a user’s general location. Without needing to operate a vast, all-pervasive privacy-busting tracking infrastructure to feed open-ended surveillance dossiers on what everyone does online, as Facebook chooses to.
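
To make the distinction concrete, here is a toy sketch of contextual targeting: the ad is picked from nothing more than the query a person just typed plus a coarse location, with no stored profile, no tracking cookie and no behavioral history. The ad inventory, advertiser names and scoring logic are all invented purely for illustration.

# Toy sketch of contextual ad selection: the only inputs are the current
# query and a coarse location -- no user profile, no tracking history.
# Inventory and scoring are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    keywords: set
    regions: set  # coarse regions only, e.g. country codes

INVENTORY = [
    Ad("GreenThumb Tools", {"gardening", "seeds", "compost"}, {"ES", "FR"}),
    Ad("TrailBikes", {"cycling", "bike", "helmet"}, {"ES"}),
    Ad("CityBreaks", {"hotel", "flight", "weekend"}, {"FR", "DE"}),
]

def pick_contextual_ad(query: str, region: str):
    """Return the ad whose keywords best overlap the words in this query,
    restricted to ads targeting the visitor's coarse region."""
    words = set(query.lower().split())
    candidates = [ad for ad in INVENTORY if region in ad.regions]
    scored = [(len(ad.keywords & words), ad) for ad in candidates]
    score, best = max(scored, key=lambda pair: pair[0], default=(0, None))
    return best if score > 0 else None

# A search for gardening supplies from Spain gets a gardening ad:
# relevance derived from the moment, not from a surveillance dossier.
ad = pick_contextual_ad("buy compost and seeds for my garden", "ES")
print(ad.advertiser if ad else "no match")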

And here Zuckerberg gets really disingenuous, using a benign-sounding example of a contextual ad (the one he chooses contains just an interest and a general location) as cover for a detail-light explanation of how Facebook’s people-tracking and profiling apparatus actually works.

“Based on what pages people like, what they click on, and other signals, we create categories — for example, people who like pages about gardening and live in Spain — and then charge advertisers to show ads to that category,” he writes, with that slipped in reference to “other signals” doing some careful shielding work there.
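
Taking Zuckerberg’s own description at face value, and leaving aside those unexplained “other signals”, the mechanism he sketches amounts to something like the toy grouping below: page likes plus stated location rolled up into labelled buckets that advertisers can then pay to reach. The page names, interest mappings and user data are invented for illustration; this is not Facebook’s implementation.

# Toy illustration of the category-building step Zuckerberg describes:
# page likes + stated location rolled up into advertiser-facing buckets.
# Names and data are invented; this is not Facebook's implementation,
# and it deliberately ignores the "other signals" he glosses over.
from collections import defaultdict

users = [
    {"id": 1, "likes": {"Gardening Tips", "Organic Veg"}, "country": "Spain"},
    {"id": 2, "likes": {"Gardening Tips"},                "country": "Spain"},
    {"id": 3, "likes": {"Mountain Biking"},               "country": "France"},
]

PAGE_TO_INTEREST = {
    "Gardening Tips": "gardening",
    "Organic Veg": "gardening",
    "Mountain Biking": "cycling",
}

def build_categories(users):
    """Group user ids into '<interest>, lives in <country>' buckets."""
    categories = defaultdict(set)
    for user in users:
        interests = {PAGE_TO_INTEREST[p] for p in user["likes"] if p in PAGE_TO_INTEREST}
        for interest in interests:
            categories[f"{interest}, lives in {user['country']}"].add(user["id"])
    return categories

# "people who like pages about gardening and live in Spain" -> {1, 2}
for label, members in build_categories(users).items():
    print(label, members)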

Other categories that Facebook’s algorithms have been found ready and willing to accept payment to run ads against in recent years include “jew-hater”, “How to burn Jews” and “Hitler did nothing wrong”.

Funnily enough Zuckerberg doesn’t mention those actual Facebook microtargeting categories in his glossy explainer of how its “relevant” ads business works. But they offer a far truer glimpse of the kinds of labels Facebook’s business sticks on people.

As we wrote last week, the case against behavioral ads is stacking up. Zuckerberg’s attempt to spin the same self-serving lines should really fool no one at this point.

Nor should regulators be derailed by the lie that Facebook’s creepy business model is the only version of adtech possible. It’s not even the only version of profitable adtech currently available. (Contextual ads have made the Google-alternative search engine DuckDuckGo profitable since 2014, for example.)

Simply put, adtech doesn’t have to be creepy to work. And ads that don’t creep on people would give publishers greater ammunition to persuade ad-block-using readers to whitelist their websites. A new generation of people-sensitive startups is also busy working on new forms of ad targeting that bake in privacy by design.

And with legal and regulatory risk rising, intrusive and creepy adtech that demands the equivalent of an ongoing strip search of every Internet user on the planet really looks to be on borrowed time.

Facebook’s problem is that it scrambled for big data and, finding it easy to suck up tonnes of the personal stuff on the unregulated Internet, built an antisocial surveillance business that needs to capture both sides of its market (eyeballs and advertisers) and keep them buying into an exploitative and even abusive relationship for its business to keep minting money.

Pivoting that tanker would certainly be tough, and in any case who’d trust a Zuckerberg who suddenly proclaimed himself the privacy messiah?

But it sure is a long way from ‘move fast and break things’ to trying to claim there’s only one business model to rule them all.


Source: TechCrunch
