When surveillance meets incompetence

Posted by on Feb 19, 2019 in Artificial Intelligence, Asia, China, face recognition, facial recognition, Opinion, Privacy, Security, sensenets, surveillance, TC

Last week brought an extraordinary demonstration of the dangers of operating a surveillance state — especially a shabby one, as China’s apparently is. An unsecured database exposed millions of records of Chinese Muslims being tracked via facial recognition — an ugly trifecta of prejudice, bureaucracy and incompetence.

The security lapse was discovered by Victor Gevers at the GDI Foundation, a security organization working in the public’s interest. Using the infamous but useful Shodan search engine, he found a MongoDB instance owned by the Chinese company SenseNets that stored an ever-increasing number of data points from a facial recognition system apparently at least partially operated by the Chinese government.

Many of the targets of this system were Uyghur Muslims, an ethnic and religious minority in China that the country has persecuted in what it considers secrecy, isolating them in remote provinces in what amount to religious gulags.

This database was no limited sting operation: some 2.5 million people had their locations and other data listed in it. Gevers told me that data points included national ID card number with issuance and expiry dates; sex; nationality; home address; DOB; photo; employer; and known previously visited face detection locations.

This data, Gevers said, plainly “had been visited multiple times by visitors all over the globe. And also the database was ransacked somewhere in December by a known actor,” one known as Warn, who has previously ransomed poorly configured MongoDB instances. So it’s all out there now.

A bad idea, poorly executed, with sad parallels

Courtesy: Victor Gevers/GDI.foundation

First off, it is bad enough that the government is using facial recognition systems to target minorities and track their movements, especially considering the treatment many of these people have already received. The ethical failure on full display here is colossal, but unfortunately no more than we have come to expect from an increasingly authoritarian China.

Using technology as a tool to track and influence the populace is a proud bullet point on the country’s security agenda, but even allowing for the cultural differences that produce something like the social credit rating system, the wholesale surveillance of a minority group is beyond the pale. (And I say this in full knowledge of our own problematic methods in the U.S.)

But to do this thing so poorly is just embarrassing, and should serve as a warning to anyone who thinks a surveillance state can be well administered — in Congress, for example. We’ve seen security tech theater from China before, in the ineffectual and likely barely functioning AR displays for scanning nearby faces, but this is different — not a stunt but a major effort and correspondingly large failure.

The duty of monitoring these citizens was obviously at least partially outsourced to SenseNets (note this is different from SenseTime, but many of the same arguments will apply to any major people-tracking tech firm), which in a way mirrors the current controversy in the U.S. regarding Amazon’s Rekognition and its use — though on a far, far smaller scale — by police departments. It is not possible for federal or state actors to spin up and support the tech and infrastructure involved in such a system on short notice; like so many other things, the actual execution falls to contractors.

And as SenseNets shows, these contractors can easily get it wrong, sometimes disastrously so.

MongoDB, it should be said, is not inherently difficult to secure; it’s just a matter of choosing the right settings in deployment (settings that are now but were not always the defaults). But for some reason people tend to forget to check those boxes when using the popular system; over and over we’ve seen poorly configured instances being accessible to the public, exposing hundreds of thousands of accounts. This latest one must surely be the largest and most damaging, however.
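To make the point about deployment settings concrete, here is a minimal sketch, in Python, of the sort of check a researcher can run against an instance like this; the host name is hypothetical and the pymongo driver is assumed, so treat it as an illustration rather than a recipe tied to this incident.

```python
# Minimal sketch: probe whether a MongoDB instance answers anonymous reads.
# HOST is hypothetical; the pymongo driver is assumed to be installed.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

HOST = "db.example.com"  # hypothetical address of the kind surfaced by Shodan

try:
    client = MongoClient(HOST, 27017, serverSelectionTimeoutMS=3000)
    names = client.list_database_names()  # only succeeds if no authentication is required
    print("EXPOSED: anonymous read access, databases:", names)
except OperationFailure:
    print("OK: the server requires authentication")
except ServerSelectionTimeoutError:
    print("OK: the server is not reachable from here")

# The fix is configuration, not code. In mongod.conf:
#   security:
#     authorization: enabled   # require credentials for reads and writes
#   net:
#     bindIp: 127.0.0.1        # bind to an internal interface, not 0.0.0.0
```

If the anonymous listing succeeds, the instance is effectively public; the settings in the closing comment are the boxes that need checking at deployment time.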

Gevers pointed out that the server was also highly vulnerable to MySQL exploits among other things, and was of course globally visible on Shodan. “So this was a disaster waiting to happen,” he said.

In fact it was a disaster waiting to happen twice: the company re-exposed the database a few days after securing it, after I wrote this story but before I published it.

Living in a glass house

The truth is, though, that any such centralized database of sensitive information is a disaster waiting to happen, for pretty much everyone involved. A facial recognition database full of carefully organized demographic data and personal movements is a hell of a juicy target, and as the SenseNets instance shows, malicious actors foreign and domestic will waste no time taking advantage of the slightest slip-up (to say nothing of a monumental failure).

We know major actors in the private sector fail at this stuff all the time and, adding insult to injury, are not held responsible — case in point: Equifax. We know our weapons systems are hackable; our electoral systems are trivial to compromise and under active attack; the census is a security disaster; and unsurprisingly the agencies responsible for making all these rickety systems are themselves both unprepared and ignorant, by the government’s own admission… not to mention unconcerned with due process.

The companies and governments of today are simply not equipped to handle the enormousness, or recognize the enormity, of large-scale surveillance. Not only that, but the people that compose those companies and governments are far from reliable themselves, as we have seen from repeated abuse and half-legal uses of surveillance technologies for decades.

Naturally we must also consider the known limitations of these systems, such as their poor record with people of color, the lack of transparency with which they are generally implemented and the inherently indiscriminate nature of their collection methods. The systems themselves are not ready.

A failure at any point in the process of legalizing, creating, securing, using or administrating these systems can have serious political consequences (such as the exposure of a national agenda, which one can imagine could be held for ransom), commercial consequences (who would trust SenseNets after this? The government must be furious) and, most importantly, personal consequences — to the people whose data is being exposed.

And this is all due (here, in China, and elsewhere) to the desire of a government to demonstrate tech superiority, and of a company to enable that and enrich itself in the process.

In the case of this particular database, Gevers says that although the policy of the GDI is one of responsible disclosure, he immediately regretted his role. “Personally it made [me] angry after I found out that I unknowingly helped the company secure its oppression tool,” he told me. “This was not a happy experience.”

The best we can do, as Gevers did, is to loudly proclaim how bad the idea is and how poorly it has been done, is being done and will be done.


Source: The Tech Crunch


The facts about Facebook

Posted by on Jan 26, 2019 in Adtech, Advertising Tech, Artificial Intelligence, Europe, Facebook, Mark Zuckerberg, Privacy, Security, Social, Social Media, surveillance, TC

This is a critical reading of Facebook founder Mark Zuckerberg’s article in the WSJ on Thursday, also entitled ‘The Facts About Facebook’.

Yes Mark, you’re right; Facebook turns 15 next month. What a long time you’ve been in the social media business! We’re curious as to whether you’ve also been keeping count of how many times you’ve been forced to apologize for breaching people’s trust or, well, otherwise royally messing up over the years.

It’s also true you weren’t setting out to build “a global company”. The predecessor to Facebook was a ‘hot or not’ game called ‘FaceMash’ that you hacked together while drinking beer in your Harvard dorm room. Your late night brainwave was to get fellow students to rate each other’s attractiveness — and you weren’t at all put off by not being in possession of the necessary photo data to do this. You just took it, hacking into the college’s online facebooks and grabbing people’s selfies without permission.

Blogging about what you were doing as you did it, you wrote: “I almost want to put some of these faces next to pictures of some farm animals and have people vote on which is more attractive.” Just in case there was any doubt as to the ugly nature of your intention. 

The seeds of Facebook’s global business were thus sown in a crude and consentless game of clickbait whose idea titillated you so much you thought nothing of breaching security, privacy, copyright and decency norms just to grab a few eyeballs.

So while you may not have instantly understood how potent this ‘outrageous and divisive’ eyeball-grabbing content tactic would turn out to be — oh hai future global scale! — the core DNA of Facebook’s business sits in that frat boy discovery where your eureka Internet moment was finding you could win the attention jackpot by pitting people against each other.

Pretty quickly you also realized you could exploit and commercialize human one-upmanship — gotta catch em all friend lists! popularity poke wars! — and stick a badge on the resulting activity, dubbing it ‘social’.

FaceMash was antisocial, though. And the unpleasant flipside that can clearly flow from ‘social’ platforms is something you continue to be neither honest nor open enough about. Whether it’s political disinformation, hate speech or bullying, the individual and societal impacts of maliciously minded content shared and amplified using massively mainstream tools you control are now impossible to ignore.

Yet you prefer to play down these human impacts; as a “crazy idea”, or by implying that ‘a little’ amplified human nastiness is the necessary cost of being in the big multinational business of connecting everyone and ‘socializing’ everything.

But did you ask the father of 14-year-old Molly Russell, a British schoolgirl who took her own life in 2017, whether he’s okay with your growth vs controls trade-off? “I have no doubt that Instagram helped kill my daughter,” said Russell in an interview with the BBC this week.

After her death, Molly’s parents found she had been following accounts on Instagram that were sharing graphic material related to self-harming and suicide, including some accounts that actively encourage people to cut themselves. “We didn’t know that anything like that could possibly exist on a platform like Instagram,” said Russell.

Without a human editor in the mix, your algorithmic recommendations are blind to risk and suffering. Built for global scale, they get on with the expansionist goal of maximizing clicks and views by serving more of the same sticky stuff. And more extreme versions of things users show an interest in to keep the eyeballs engaged.

So when you write about making services that “billions” of “people around the world love and use” forgive us for thinking that sounds horribly glib. The scales of suffering don’t sum like that. If your entertainment product has whipped up genocide anywhere in the world — as the UN said Facebook did in Myanmar — it’s failing regardless of the proportion of users who are having their time pleasantly wasted on and by Facebook.

And if your algorithms can’t incorporate basic checks and safeguards so they don’t accidentally encourage vulnerable teens to commit suicide you really don’t deserve to be in any consumer-facing business at all.

Yet your article shows no sign you’ve been reflecting on the kinds of human tragedies that don’t just play out on your platform but can be an emergent property of your targeting algorithms.

You focus instead on what you call “clear benefits to this business model”.

The benefits to Facebook’s business are certainly clear. You have the billions in quarterly revenue to stand that up. But what about the costs to the rest of us? Human costs are harder to quantify but you don’t even sound like you’re trying.

You do write that you’ve heard “many questions” about Facebook’s business model. Which is most certainly true but once again you’re playing down the level of political and societal concern about how your platform operates (and how you operate your platform) — deflecting and reframing what Facebook is to cast your ad business as a form of quasi-philanthropy; a comfortable discussion topic and self-serving idea you’d much prefer we were all sold on.

It’s also hard to shake the feeling that your phrasing at this point is intended as a bit of an in-joke for Facebook staffers — to smirk at the ‘dumb politicians’ who don’t even know how Facebook makes money.

Y’know, like you smirked…

Then you write that you want to explain how Facebook operates. But, thing is, you don’t explain — you distract, deflect, equivocate and mislead, which has been your business’s strategy through many months of scandal (that and worse tactics — such as paying a PR firm that used oppo research tactics to discredit Facebook critics with smears).

Dodging is another special power; such as how you dodged repeat requests from international parliamentarians to be held accountable for major data misuse and security breaches.

The Zuckerberg ‘open letter’ mansplain, which typically runs to thousands of blame-shifting words, is another standard issue production from the Facebook reputation crisis management toolbox.

And here you are again, ironically enough, mansplaining in a newspaper; an industry that your platform has worked keenly to gut and usurp, hungry to supplant editorially guided journalism with the moral vacuum of algorithmically geared space-filler which, left unchecked, has been shown, time and again, to lift divisive and damaging content into public view.

The latest Zuckerberg screed has nothing new to say. It’s pure spin. We’ve read scores of self-serving Facebook apologias over the years and can confirm Facebook’s founder has made a very tedious art of selling abject failure as some kind of heroic lack of perfection.

But the spin has been going on for far, far too long. Fifteen years, as you remind us. Yet given that hefty record it’s little wonder you’re moved to pen again — imagining that another word blast is all it’ll take for the silly politicians to fall in line.

Thing is, no one is asking Facebook for perfection, Mark. We’re looking for signs that you and your company have a moral compass. Because the opposite appears to be true. (Or as one UK parliamentarian put it to your CTO last year: “I remain to be convinced that your company has integrity”.)

Facebook has scaled to such an unprecedented, global size exactly because it has no editorial values. And you say again now you want to be all things to all men. Put another way that means there’s a moral vacuum sucking away at your platform’s core; a supermassive ethical black hole that scales ad dollars by the billions because you won’t tie the kind of process knots necessary to treat humans like people, not pairs of eyeballs.

You don’t design against negative consequences or to pro-actively avoid terrible impacts — you let stuff happen and then send in the ‘trust & safety’ team once the damage has been done.

You might call designing against negative consequences a ‘growth bottleneck’; others would say it’s having a conscience.

Everything standing in the way of scaling Facebook’s usage is, under the Zuckerberg regime, collateral damage — hence the old mantra of ‘move fast and break things’ — whether it’s social cohesion, civic values or vulnerable individuals.

This is why it takes a celebrity defamation lawsuit to force your company to dribble a little more resource into doing something about scores of professional scammers paying you to pop their fraudulent schemes in a Facebook “ads” wrapper. (Albeit, you’re only taking some action in the UK in this particular case.)

Funnily enough — though it’s not at all funny and it doesn’t surprise us — Facebook is far slower and patchier when it comes to fixing things it broke.

Of course there will always be people who thrive with a digital megaphone like Facebook thrust in their hand. Scammers being a pertinent example. But the measure of a civilized society is how it protects those who can’t defend themselves from targeted attacks or scams because they lack the protective wrap of privilege. Which means people who aren’t famous. Not public figures like Martin Lewis, the consumer champion who has his own platform and enough financial resources to file a lawsuit to try to make Facebook do something about how its platform supercharges scammers.

Zuckerberg’s slippery call to ‘fight bad content with more content’ — or to fight Facebook-fuelled societal division by shifting even more of the apparatus of civic society onto Facebook — fails entirely to recognize this asymmetry.

And even in the Lewis case, Facebook remains a winner; Lewis dropped his suit and Facebook got to make a big show of signing over £500k worth of ad credit coupons to a consumer charity that will end up giving them right back to Facebook.

The company’s response to problems its platform creates is to look the other way until a trigger point of enough bad publicity gets reached. At which critical point it flips the usual crisis PR switch and sends in a few token clean up teams — who scrub a tiny proportion of terrible content; or take down a tiny number of fake accounts; or indeed make a few token and heavily publicized gestures — before leaning heavily on civil society (and on users) to take the real strain.

You might think Facebook reaching out to respected external institutions is a positive step. A sign of a maturing mindset and a shift towards taking greater responsibility for platform impacts. (And in the case of scam ads in the UK it’s donating £3M in cash and ad credits to a bona fide consumer advice charity.)

But this is still Facebook dumping problems of its making on an already under-resourced and over-worked civic sector at the same time as its platform supersizes their workload.

In recent years the company has also made a big show of getting involved with third party fact checking organizations across various markets — using these independents to stencil in a PR strategy for ‘fighting fake news’ that also entails Facebook offloading the lion’s share of the work. (It’s not paying fact checkers anything; given the clear conflict that would represent, it obviously can’t.)

So again external organizations are being looped into Facebook’s mess — in this case to try to drain the swamp of fakes being fenced and amplified on its platform — even as the scale of the task remains hopeless, and all sorts of junk continues to flood into and pollute the public sphere.

What’s clear is that none of these organizations has the scale or the resources to fix problems Facebook’s platform creates. Yet it serves Facebook’s purposes to be able to point to them trying.

And all the while Zuckerberg is hard at work fighting to fend off regulation that could force his company to take far more care and spend far more of its own resources (and profits) monitoring the content it monetizes by putting it in front of eyeballs.

The Facebook founder is fighting because he knows his platform is a targeted attack: on individual attention, via privacy-hostile behaviorally targeted ads (his euphemism for this is “relevant ads”); on social cohesion, via divisive algorithms that drive outrage in order to maximize platform engagement; and on democratic institutions and norms, by systematically eroding consensus and the potential for compromise between the different groups every society is composed of.

In his WSJ post Zuckerberg can only claim Facebook doesn’t “leave harmful or divisive content up”. He has no defence against Facebook having put it up and enabled it to spread in the first place.

Sociopaths relish having a soapbox so unsurprisingly these people find a wonderful home on Facebook. But where does empathy fit into the antisocial media equation?

As for Facebook being a ‘free’ service — a point Zuckerberg is most keen to impress in his WSJ post — it’s of course a cliché to point out that ‘if it’s free you’re the product’. (Or as the even older saying goes: ‘There’s no such thing as a free lunch’).

But for the avoidance of doubt, “free” access does not mean cost-free access. And in Facebook’s case the cost is both individual (to your attention and your privacy); and collective (to the public’s attention and to social cohesion).

The much bigger question is who actually benefits if “everyone” is on Facebook, as Zuckerberg would prefer. Facebook isn’t the Internet. Facebook doesn’t offer the sole means of communication, digital or otherwise. People can, and do, ‘connect’ (if you want to use such a transactional word for human relations) just fine without Facebook.

So beware the hard and self-serving sell in which the founder of a now 15-year-old Facebook seeks yet again to recast privacy as an unaffordable luxury.

Actually, Mark, it’s a fundamental human right.

The best argument Zuckerberg can muster for his goal of universal Facebook usage being good for anything other than his own business’ bottom line is to suggest small businesses could use that kind of absolute reach to drive extra growth of their own.

Though he only provides a few general data-points to support the claim; saying there are “more than 90M small businesses on Facebook” which “make up a large part of our business” (how large?) — and claiming “most” (51%?) couldn’t afford TV ads or billboards (might they be able to afford other online or newspaper ads though?); he also cites a “global survey” (how many businesses surveyed?), presumably run by Facebook itself, which he says found “half the businesses on Facebook say they’ve hired more people since they joined” (but how did you ask the question, Mark?; we’re concerned it might have been rather leading), and from there he leaps to the implied conclusion that “millions” of jobs have essentially been created by Facebook.

But did you control for common causes Mark? Or are you just trying to take credit for others’ hard work because, well, it’s politically advantageous for you to do so?

Whether Facebook’s claims about being great for small business stand up to scrutiny or not, if people’s fundamental rights are being wholesale flipped for SMEs to make a few extra bucks that’s an unacceptable trade off.

“Millions” of jobs suggestively linked to Facebook sure sounds great — but you can’t and shouldn’t overlook disproportionate individual and societal costs, as Zuckerberg is urging policymakers to here.

Let’s also not forget that some of the small business ‘jobs’ that Facebook’s platform can take definitive and major credit for creating include the Macedonian teens who became hyper-adept at seeding Facebook with fake U.S. political news around the 2016 presidential election. But presumably those aren’t the kind of jobs Zuckerberg is advocating for.

He also repeats the spurious claim that Facebook gives users “complete control” over what it does with personal information collected for advertising.

We’ve heard this time and time again from Zuckerberg and yet it remains pure BS.

WASHINGTON, DC – APRIL 10: Facebook co-founder, Chairman and CEO Mark Zuckerberg concludes his testimony before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill April 10, 2018 in Washington, DC. Zuckerberg, 33, was called to testify after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Win McNamee/Getty Images)

Yo Mark! First up we’re still waiting for your much trumpeted ‘Clear History’ tool. You know, the one you claimed you thought of under questioning in Congress last year (and later used to fend off follow up questions in the European Parliament).

Reportedly the tool is due this Spring. But even when it does finally drop it represents another classic piece of gaslighting by Facebook, given how it seeks to normalize (and so enable) the platform’s pervasive abuse of its users’ data.

Truth is, there is no master ‘off’ switch for Facebook’s ongoing surveillance. Such a switch — were it to exist — would represent a genuine control for users. But Zuckerberg isn’t offering it.

Instead his company continues to groom users into accepting being creeped on by offering pantomime settings that boil down to little more than privacy theatre — if they even realize they’re there.

‘Hit the button! Reset cookies! Delete browsing history! Keep playing Facebook!’

An interstitial reset is clearly also a dilute decoy. It’s not the same as being able to erase all extracted insights Facebook’s infrastructure continuously mines from users, using these derivatives to target people with behavioral ads; tracking and profiling on an ongoing basis by creeping on browsing activity (on and off Facebook), and also by buying third party data on its users from brokers.

Multiple signals and inferences are used to flesh out individual ad profiles on an ongoing basis, meaning the files are never static. And there’s simply no way to tell Facebook to burn your digital ad mannequin. Not even if you delete your Facebook account.

Nor, indeed, is there a way to get a complete read out from Facebook on all the data it’s attached to your identity. Even in Europe, where companies are subject to strict privacy laws that place a legal requirement on data controllers to disclose all personal data they hold on a person on request, as well as who they’re sharing it with, for what purposes, under what legal grounds.

Last year Paul-Olivier Dehaye, the founder of PersonalData.IO, a startup that aims to help people control how their personal data is accessed by companies, recounted in the UK parliament how he’d spent years trying to obtain all his personal information from Facebook — with the company resorting to legal arguments to block his subject access request.

Dehaye said he had succeeded in extracting a bit more of his data from Facebook than it initially handed over. But it was still just a “snapshot”, not an exhaustive list, of all the advertisers who Facebook had shared his data with. This glimpsed tip implies a staggeringly massive personal data iceberg lurking beneath the surface of each and every one of the 2.2BN+ Facebook users. (Though the figure is likely even more massive because it tracks non-users too.)

Zuckerberg’s “complete control” wording is therefore at best self-serving and at worst an outright lie. Facebook’s business has complete control of users by offering only a superficial layer of confusing and fiddly, ever-shifting controls that demand continued presence on the platform to use them, and ongoing effort to keep on top of settings changes (which are always, to a fault, privacy hostile), making managing your personal data a life-long chore.

Facebook’s power dynamic puts the onus squarely on the user to keep finding and hitting the reset button.

But this too is a distraction. Resetting anything on its platform is largely futile, given Facebook retains whatever behavioral insights it already stripped off of your data (and fed to its profiling machinery). And its omnipresent background snooping carries on unchecked, amassing fresh insights you also can’t clear.

Nor does Clear History offer any control for the non-users Facebook tracks via the pixels and social plug-ins it’s larded around the mainstream web. Zuckerberg was asked about so-called shadow profiles in Congress last year — which led to this awkward exchange where he claimed not to know what the phrase refers to.

EU MEPs also seized on the issue, pushing him to respond. He did so by attempting to conflate surveillance and security — by claiming it’s necessary for Facebook to hold this data to keep “bad content out”. Which seems a bit of an ill-advised argument to make given how badly that mission is generally going for Facebook.

Still, Zuckerberg repeats the claim in the WSJ post, saying information collected for ads is “generally important for security and operating our services” — using this to address what he couches as “the important question of whether the advertising model encourages companies like ours to use and store more information than we otherwise would”.

So, essentially, Facebook’s founder is saying that the price for Facebook’s existence is pervasive surveillance of everyone, everywhere, with or without your permission.

Though he doesn’t express that ‘fact’ as a cost of his “free” platform. RIP privacy indeed.

Another pertinent example of Zuckerberg simply not telling the truth when he wrongly claims Facebook users can control their information vis-a-vis his ad business — an example which also happens to underline how pernicious his attempts to use “security” to justify eroding privacy really are — bubbled into view last fall, when Facebook finally confessed that mobile phone numbers users had provided for the specific purpose of enabling two-factor authentication (2FA) to increase the security of their accounts were also used by Facebook for ad targeting.

A company spokesperson told us that if a user wanted to opt out of the ad-based repurposing of their mobile phone data they could use non-phone number based 2FA — though Facebook only added the ability to use an app for 2FA in May last year.

What Facebook is doing on the security front is especially disingenuous BS in that it risks undermining security practice by bundling a respected tool (2FA) with ads that creep on people.

And there’s plenty more of this kind of disingenuous nonsense in Zuckerberg’s WSJ post — where he repeats a claim we first heard him utter last May, at a conference in Paris, when he suggested that following changes made to Facebook’s consent flow, ahead of updated privacy rules coming into force in Europe, the fact European users had (mostly) swallowed the new terms, rather than deleting their accounts en masse, was a sign people were majority approving of “more relevant” (i.e more creepy) Facebook ads.

Au contraire, it shows nothing of the sort. It simply underlines the fact Facebook still does not offer users a free and fair choice when it comes to consenting to their personal data being processed for behaviorally targeted ads — despite free choice being a requirement under Europe’s General Data Protection Regulation (GDPR).

If Facebook users are forced to ‘choose’ between being creeped on or deleting their account on the dominant social service where all their friends are it’s hardly a free choice. (And GDPR complaints have been filed over this exact issue of ‘forced consent‘.)

Add to that, as we said at the time, Facebook’s GDPR tweaks were lousy with manipulative, dark pattern design. So again the company is leaning on users to get the outcomes it wants.

It’s not a fair fight, any which way you look at it. But here we have Zuckerberg, the BS salesman, trying to claim his platform’s ongoing manipulation of people already enmeshed in the network is evidence for people wanting creepy ads.


The truth is that most Facebook users remain unaware of how extensively the company creeps on them (per this recent Pew research). And fiddly controls are of course even harder to get a handle on if you’re sitting in the dark.

Zuckerberg appears to concede a little ground on the transparency and control point when he writes that: “Ultimately, I believe the most important principles around data are transparency, choice and control.” But all the privacy-hostile choices he’s made, the faux controls he’s offered, and the data mountain he simply won’t ‘fess up to sitting on show, beyond reasonable doubt, that the company cannot and will not self-regulate.

If Facebook is allowed to continue setting its own parameters and choosing its own definitions (for “transparency, choice and control”) users won’t have even one of the three principles, let alone the full house, as well they should. Facebook will just keep moving the goalposts and marking its own homework.

You can see this in the way Zuckerberg fuzzes and elides what his company really does with people’s data; and how he muddies and muddles uses for the data — such as by saying he doesn’t know what shadow profiles are; or claiming users can download ‘all their data’; or that ad profiles are somehow essential for security; or by repurposing 2FA digits to personalize ads too.

How do you try to prevent the purpose limitation principle being applied to regulate your surveillance-reliant big data ad business? Why by mixing the data streams of course! And then trying to sow confusion among regulators and policymakers by forcing them to unpick your mess.

Much like Facebook is forcing civic society to clean up its messy antisocial impacts.

Europe’s GDPR is focusing the conversation, though, and targeted complaints filed under the bloc’s new privacy regime have shown they can have teeth and so bite back against rights incursions.

But before we put another self-serving Zuckerberg screed to rest, let’s take a final look at his description of how Facebook’s ad business works. Because this is also seriously misleading. And cuts to the very heart of the “transparency, choice and control” issue he’s quite right is central to the personal data debate. (He just wants to get to define what each of those words means.)

In the article, Zuckerberg claims “people consistently tell us that if they’re going to see ads, they want them to be relevant”. But who are these “people” of which he speaks? If he’s referring to the aforementioned European Facebook users, who accepted updated terms with the same horribly creepy ads because he didn’t offer them any alternative, we would suggest that’s not a very affirmative signal.

Now if it were true that a generic group of ‘Internet people’ were consistently saying anything about online ads, the loudest message would most likely be that they don’t like them. Click-through rates are fantastically small. Hence, too, the many people using ad blocking tools. (Growth in usage of ad blockers has also occurred in parallel with the increasing incursions of the adtech industrial surveillance complex.)

So Zuckerberg’s logical leap to claim users of free services want to be shown only the most creepy ads is really a very odd one.

Let’s now turn to Zuckerberg’s use of the word “relevant”. As we noted above, this is a euphemism. It conflates many concepts but principally it’s used by Facebook as a cloak to shield and obscure the reality of what it’s actually doing (i.e. privacy-hostile people profiling to power intrusive, behaviourally microtargeted ads) in order to avoid scrutiny of exactly those creepy and intrusive Facebook practices.

Yet the real sleight of hand is how Zuckerberg glosses over the fact that ads can be relevant without being creepy. Because ads can be contextual. They don’t have to be behaviorally targeted.

Ads can be based on — for example — a real-time search/action plus a user’s general location. Without needing to operate a vast, all-pervasive privacy-busting tracking infrastructure to feed open-ended surveillance dossiers on what everyone does online, as Facebook chooses to.
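As a rough illustration of that distinction, here is a minimal sketch, with invented ad inventory and field names, of contextual ad selection: it uses only the current query and a coarse location, with no per-user profile or history involved.

```python
# Minimal sketch with invented inventory and field names: contextual targeting
# uses only the current request (query text plus a coarse location), with no
# per-user profile, tracking history or identity involved at all.
from typing import Dict, List, Set

AD_INVENTORY: List[Dict] = [
    {"ad": "Drip irrigation kits", "keywords": {"gardening", "plants"}, "regions": {"ES"}},
    {"ad": "Ski passes",           "keywords": {"skiing", "snow"},      "regions": {"AT", "CH"}},
]

def pick_contextual_ads(query: str, region: str) -> List[str]:
    """Match ads against the words of the current search/action plus a general
    location; nothing about the person is stored or looked up."""
    words: Set[str] = set(query.lower().split())
    return [
        item["ad"]
        for item in AD_INVENTORY
        if words & item["keywords"] and region in item["regions"]
    ]

# A user in Spain searching "gardening tools" sees the irrigation ad, without
# any behavioural dossier being consulted.
print(pick_contextual_ads("gardening tools", "ES"))
```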

And here Zuckerberg gets really disingenuous because he uses a benign-sounding example of a contextual ad (the example he chooses contains an interest and a general location) to gloss over a detail-light explanation of how Facebook’s people tracking and profiling apparatus works.

“Based on what pages people like, what they click on, and other signals, we create categories — for example, people who like pages about gardening and live in Spain — and then charge advertisers to show ads to that category,” he writes, with that slipped in reference to “other signals” doing some careful shielding work there.

Other categories that Facebook’s algorithms have been found ready and willing to accept payment to run ads against in recent years include “jew-hater”, “How to burn Jews” and “Hitler did nothing wrong”.

Funnily enough Zuckerberg doesn’t mention those actual Facebook microtargeting categories in his glossy explainer of how its “relevant” ads business works. But they offer a far truer glimpse of the kinds of labels Facebook’s business sticks on people.

As we wrote last week, the case against behavioral ads is stacking up. Zuckerberg’s attempt to spin the same self-serving lines should really fool no one at this point.

Nor should regulators be derailed by the lie that Facebook’s creepy business model is the only version of adtech possible. It’s not even the only version of profitable adtech currently available. (Contextual ads have made Google alternative search engine DuckDuckGo profitable since 2014, for example.)

Simply put, adtech doesn’t have to be creepy to work. And ads that don’t creep on people would give publishers greater ammunition to sell ad block-using readers on whitelisting their websites. A new generation of people-sensitive startups are also busy working on new forms of ad targeting that bake in privacy by design.

And with legal and regulatory risk rising, intrusive and creepy adtech that demands the equivalent of ongoing strip searches of every Internet user on the planet really looks to be on borrowed time.

Facebook’s problem is it scrambled for big data and, finding it easy to suck up tonnes of the personal stuff on the unregulated Internet, built an antisocial surveillance business that needs to capture both sides of its market — eyeballs and advertisers — and keep them buying into an exploitative and even abusive relationship for its business to keep minting money.

Pivoting that tanker would certainly be tough, and in any case who’d trust a Zuckerberg who suddenly proclaimed himself the privacy messiah?

But it sure is a long way from ‘move fast and break things’ to trying to claim there’s only one business model to rule them all.


Source: The Tech Crunch


In revamped transparency report, Apple reveals uptick in demands for user data

Posted by on Dec 20, 2018 in apple inc, espionage, Government, law enforcement, mass surveillance, national security, Politics, Privacy, surveillance, transparency report

Apple’s transparency report just got a lot more — well, transparent.

For years, the technology giant released a twice-a-year report on the number of government demands it received. It wasn’t much to look at in the beginning; a seven-page document with only two tables of data. Once in a while, Apple would tack on a new table of data as the government would ask for new kinds of customer data.

But that wasn’t sustainable, nor was it particularly easy to read — especially for the hawkish handful who would obsessively read and digest each report.

As other companies, like Microsoft and Google, received more demands over the years, they began to expand their own reports to help users to better understand who wanted their data, why and how often. Apple knew its document-only reports didn’t cut it, and took a leaf from its Silicon Valley neighbors and pushed ahead with its own plan to publish its biannual numbers in a way that ordinary people — like its customers — can read and understand.

The company’s latest transparency report, out Thursday, still comes in its traditional PDF format for those who don’t like change, but now also has its own dedicated, browsable and interactive corner of Apple’s website. The new site breaks down the figures by country — but also historically to provide trends, patterns and context over years’ worth of reporting cycles, in a way that’s more in line with how other tech giants report their government data demands.

And, the company has CSV files for download, containing raw data for academics who want to drill down into the numbers.

Apple has also reworked how it discloses national security requests, such as FBI-issued national security letters (NSLs) and orders issued by the Foreign Intelligence Surveillance Court (FISC). Since the USA Freedom Act of 2015, passed in response to the 2013 NSA surveillance scandal, companies have been given three options for reporting their secret orders, each specifying the numerical bands they can release and after what delay. Most companies disclose the secret requests in bands of 500 with a six-month reporting delay to avoid any inadvertent interference with active investigations. Apple originally released its figures in bands of 250 requests, but is now expanding that to bands of 500 requests to standardize its reporting with other tech companies. It’s also breaking out its FISA content requests (such as photos, email, contacts and device backups) from non-content requests (like subscriber records and transactional logs).

As for the figures, the transparency report reveals a rise in worldwide demands for data.

According to the report, Apple received 32,342 demands — up 9 percent on the last reporting period — to access 163,823 devices in the second half of the year.

The report showed Germany was the top requester, issuing 13,704 requests for data on 26,160 devices. Apple said the figures were driven by a high volume of device requests relating to stolen devices. The U.S. was in second place with 4,570 requests for 14,911 devices.

Apple also received 4,177 requests for account data, such as information stored in iCloud — up by almost 25 percent on the previous reporting period — affecting some 40,641 accounts, a four-fold increase. The company said the spike was attributable to China, which asked for thousands of devices’ worth of data under a single fraud investigation.

And the company saw a 30 percent increase in requests to preserve data, rising to 1,579 cases affecting 4,033 accounts; preservation requests hold data for up to three months while law enforcement obtains the right legal process to access it.

The company also said it received between 0 and 499 national security orders, including secret rulings from the Foreign Intelligence Surveillance Court, affecting between 1,000 and 1,499 accounts. As the company is subject to a six-month reporting delay, the updated figures are expected out in the new year.

Apple did not reveal in this latest report any national security letters where the gag orders were lifted.


Source: The Tech Crunch


3D-printed heads let hackers – and cops – unlock your phone

Posted by on Dec 16, 2018 in 3d printing, biometrics, Face ID, facial recognition, facial recognition software, Hack, Identification, iOS, iPhone, learning, Mobile, model, Prevention, Privacy, Security, surveillance

There’s a lot you can make with a 3D printer: prosthetics, corneas, firearms — even an Olympic-standard luge.

You can even 3D print a life-size replica of a human head — and not just for Hollywood. Forbes reporter Thomas Brewster commissioned a 3D printed model of his own head to test the face unlocking systems on a range of phones — four Android models and an iPhone X.

Bad news if you’re an Android user: only the iPhone X defended against the attack.

Gone, it seems, are the days of the trusty passcode, which many still find cumbersome, fiddly, and inconvenient — especially when you unlock your phone dozens of times a day. Phone makers are taking to the more convenient unlock methods. Even if Google’s latest Pixel 3 shunned facial recognition, many Android models — including popular Samsung devices — are relying more on your facial biometrics. In its latest models, Apple effectively killed its fingerprint-reading Touch ID in favor of its newer Face ID.

But that poses a problem for your data if a mere 3D-printed model can trick your phone into giving up your secrets. That makes life much easier for hackers, who have no rulebook to go by. But what about the police or the feds, who do?

It’s no secret that biometrics — your fingerprints and your face — aren’t protected under the Fifth Amendment. That means police can’t compel you to give up your passcode, but they can forcibly depress your fingerprint to unlock your phone, or hold it to your face while you’re looking at it. And the police know it — it happens more often than you might realize.

But there’s also little in the way of stopping police from 3D printing or replicating a set of biometrics to break into a phone.

“Legally, it’s no different from using fingerprints to unlock a device,” said Orin Kerr, professor at USC Gould School of Law, in an email. “The government needs to get the biometric unlocking information somehow,” by either the finger pattern shape or the head shape, he said.

Although a warrant “wouldn’t necessarily be a requirement” to get the biometric data, one would be needed to use the data to unlock a device, he said.

Jake Laperruque, senior counsel at the Project On Government Oversight, said it was doable but isn’t the most practical or cost-effective way for cops to get access to phone data.

“A situation where you couldn’t get the actual person but could use a 3D print model may exist,” he said. “I think the big threat is that a system where anyone — cops or criminals — can get into your phone by holding your face up to it is a system with serious security limits.”

The FBI alone has thousands of devices in its custody — even after admitting the number of encrypted devices is far lower than first reported. With the ubiquitous nature of surveillance, now even more powerful with high-resolution cameras and facial recognition software, it’s easier than ever for police to obtain our biometric data as we go about our everyday lives.

Those cheering on the “death of the password” might want to think again. The passcode is still the only thing that’s keeping your data safe from the law.


Source: The Tech Crunch


Lawmakers say Amazon’s facial recognition software may be racially biased and harm free expression

Posted by on Nov 30, 2018 in Amazon, biometrics, facial recognition, facial recognition software, Florida, Government, Publishing, Security, surveillance, U.S. government

Amazon has “failed to provide sufficient answers” about its controversial facial recognition software, Rekognition — and lawmakers won’t take the company’s usual silent treatment for an answer.

A letter signed by eight lawmakers — including Sen. Edward Markey and Reps. John Lewis and Judy Chu — called on Amazon chief executive Jeff Bezos to explain how the company’s technology works — and where it will be used.

It comes after the cloud and retail giant secured several high-profile contracts with the U.S. government and at least one major metropolitan city, Orlando, Florida, for surveillance.

The lawmakers expressed “heightened concern given recent reports that Amazon is actively marketing its biometric technology to U.S. Immigration and Customs Enforcement, as well as other reports of pilot programs lacking any hands-on training from Amazon for participating law enforcement officers.”

They also said that the system suffers from accuracy issues — which could lead to racial bias, and could harm citizens’ constitutional rights to free expression.

“However, at this time, we have serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color, and could stifle Americans’ willingness to exercise their First Amendment rights in public,” the letter said.

The lawmakers want Amazon to explain how it tests for accuracy, whether those tests have been independently verified, and how the company tests for bias.

It comes after the ACLU found that the software incorrectly matched 28 members of Congress to mugshots, with a disproportionately high error rate for people of color.

The facial recognition software has been controversial from the start. Even after concerns from its own employees, Amazon said it would push ahead and sell the technology regardless.

Amazon has a little over two weeks to respond to the lawmakers. A spokesperson for Amazon did not respond to a request for comment.


Source: The Tech Crunch


Five years and one pivot later, Trueface emerges with a promise for better facial recognition tech

Posted by on Nov 21, 2018 in africa, Asia, Colombia, facial recognition, facial recognition software, harvard, learning, Medellin, Scout Ventures, Southeast Asia, surveillance, TC, video surveillance

Shaun Moore and Nezare Chafni didn’t initially intend to develop a new standalone facial recognition technology when they got started on the tech that would become their new company, Trueface.ai.

When the two serial entrepreneurs were planning their next act five years ago, they wanted to ride the wave of smart home technologies with the development of a new smart doorbell — called Chui.

That doorbell would be equipped with facial recognition software as a service. The company raised $500,000 in angel funding and opened a manufacturing facility in Medellin, Colombia.

What the two entrepreneurs discovered was that most existing facial recognition tools lacked the ability to identify spoof or presentation attacks, which rendered the tech unfeasible for the access control functions they were trying to develop.

So Moore and Chafni set out to develop better software for facial recognition.

 

“In 2014 we focused our engineering efforts on deploying face recognition on the edge in highly constrained environments that could identify hack or spoof attempts,” Moore, the chief executive of Trueface.ai, said in an email. “This technology is the core of what has become Trueface.”

With the upgrades to the product, Chui began tackling the commercial access control market, and while customers loved the software, they wanted to use their own hardware for the product, according to Moore.

So the two entrepreneurs shuttered the factory in 2017 and began focusing on selling the facial recognition product on its own. Thus Trueface was born.

It’s actually the third company that the two founders have worked on together. Friends since their days studying business at Southern Methodist University, Moore and Chafni previously worked on a content management startup, before moving on to Chui’s smart doorbell.

The company spun Trueface out of Chui in June 2017 and raised seed capital from investors including Scout Ventures with Harvard Business Angels and GSV Labs. That $1.5 million round has powered the company’s development since (including the integration with IFTTT earlier this year to prove that its system worked).

But over the past few years, as damning stories around the risks associated with potentially bad training data being applied to facial recognition technologies continued to appear, the company set itself another task — aligning its training data with the real world.

To that end the company has partnered with a global non-profit which is collecting facial images from Africa, Asia and Southeast Asia to create a more robust portfolio of images to train its recognition software.

“Like many facial recognition companies, we acknowledge the implicit bias in publicly available training data that can result in misidentification of certain ethnicities,” the company’s chief executive has written. “We think that is unacceptable, and have pioneered methods to collect a multiplicity of anonymized face data from around the world in order to balance our training models. For example, we partnered with non-profits in Africa and Southeast Asia to ensure our training data is diverse and inclusive, resulting in reduced bias and more accurate face recognition – for all.”

The company has also established three principles by which its technology will be applied. The first is an explicit commitment to reduce bias in training data; the second, an agreement with its customers that in any case that goes to court, human decision making is privileged over any data from its software; and finally, an explicit focus on data security to prevent breaches and data transparency so that customers disclose what information they’re collecting.

“When implemented responsibly, people will demand this technology for its daily benefits and utility, not fear it,” writes Moore.


Source: The Tech Crunch


Khashoggi’s fate shows the flip side of the surveillance state

Posted by on Oct 20, 2018 in Edward Snowden, Government, Jamal Khashoggi, law enforcement, mass surveillance, Mohammed Bin Salman, national security, Privacy, russia, Saudi Arabia, Security, Softbank, Storage, surveillance, TC, trump, Turkey, Venture Capital, Vision Fund, Visual Computing

It’s been over five years since NSA whistleblower Edward Snowden lifted the lid on government mass surveillance programs, revealing, in unprecedented detail, quite how deep the rabbit hole goes thanks to the spread of commercial software and connectivity enabling a bottomless intelligence-gathering philosophy of ‘bag it all’.

Yet technology’s onward march has hardly broken its stride.

Government spying practices are perhaps more scrutinized, as a result of awkward questions about out-of-date legal oversight regimes. Though whether the resulting legislative updates, putting an official stamp of approval on bulk and/or warrantless collection as a state spying tool, have put Snowden’s ethical concerns to bed seems doubtful — albeit, it depends on who you ask.

The UK’s post-Snowden Investigatory Powers Act continues to face legal challenges. And the government has been forced by the courts to unpick some of the powers it helped itself to vis-à-vis people’s data. But bulk collection, as an official modus operandi, has been both avowed and embraced by the state.

In the US, too, lawmakers elected to push aside controversy over a legal loophole that provides intelligence agencies with a means for the warrantless surveillance of American citizens — re-stamping Section 702 of FISA for another six years. So of course they haven’t cared a fig for non-US citizens’ privacy either.

Increasingly powerful state surveillance is seemingly here to stay, with or without adequately robust oversight. And commercial use of strong encryption remains under attack from governments.

But there’s another end to the surveillance telescope. As I wrote five years ago, those who watch us can expect to be — and indeed are being — increasingly closely watched themselves as the lens gets turned on them:

“Just as our digital interactions and online behaviour can be tracked, parsed and analysed for problematic patterns, pertinent keywords and suspicious connections, so too can the behaviour of governments. Technology is a double-edged sword – which means it’s also capable of lifting the lid on the machinery of power-holding institutions like never before.”

We’re now seeing some of the impacts of this surveillance technology cutting both ways.

With attention to detail, good connections (in all senses) and the application of digital forensics all sorts of discrete data dots can be linked — enabling official narratives to be interrogated and unpicked with technology-fuelled speed.

Witness, for example, how quickly the Kremlin’s official line on the Skripal poisonings unravelled.

After the UK released CCTV of two Russian suspects of the Novichok attack in Salisbury, last month, the speedy counter-claim from Russia, presented most obviously via an ‘interview’ with the two ‘citizens’ conducted by state mouthpiece broadcaster RT, was that the men were just tourists with a special interest in the cultural heritage of the small English town.

Nothing to see here, claimed the Russian state, even though the two unlikely tourists didn’t appear to have done much actual sightseeing on their flying visit to the UK during the tail end of a British winter (unless you count vicarious viewing of Salisbury’s wikipedia page).

But digital forensics outfit Bellingcat, partnering with investigative journalists at The Insider Russia, quickly found plenty to dig up online, and with the help of data-providing tips. (We can only speculate who those whistleblowers might be.)

Their investigation made use of a leaked database of Russian passport documents; passport scans provided by sources; publicly available online videos and selfies of the suspects; and even visual computing expertise to academically cross-match photos taken 15 years apart — to, within a few weeks, credibly unmask the ‘tourists’ as two decorated GRU agents: Anatoliy Chepiga and Dr Alexander Yevgeniyevich Mishkin.

When an official narrative that already lacks credibility is set against an external investigation able to closely show its workings and sources (where possible), and so to demonstrate how reasonably constructed and plausible the counter-narrative is, public opinion is left in little doubt about where the real authority lies.

And who the real liars are.

That the Kremlin lies is hardly news, of course. But when its lies are so painstakingly and publicly unpicked, and its veneer of untruth ripped away, there is undoubtedly reputational damage to the authority of Vladimir Putin.

The sheer depth and availability of data in the digital era supports faster-than-ever, evidence-based debunking of official fictions, threatening to erode rogue regimes built on lies by pulling away the curtain that invests their leaders with power in the first place: the pretence that the scope and range of their capacity and competency is unknowable, which lets other players on the world stage accept such a ‘leader’ at face value.

The truth about power is often far more stupid and sordid than the fiction. So a powerful abuser, with their workings revealed, can be reduced to their baser parts — and shown for the thuggish and brutal operator they really are, as well as proved a liar.

On the stupidity front, in another recent and impressive bit of cross-referencing, Bellingcat was able to turn passport data pertaining to another four GRU agents — whose identities had been made public by Dutch and UK intelligence agencies (after they had been caught trying to hack into the network of the Organisation for the Prohibition of Chemical Weapons) — into a long list of 305 suggestively linked individuals also affiliated with the same GRU military unit, and whose personal data had been sitting in a publicly available automobile registration database… Oops.

There’s no doubt certain governments have wised up to the power of public data and are actively releasing key info into the public domain where it can be pored over by journalists and interested citizen investigators — be that CCTV imagery of suspects or actual passport scans of known agents.

A cynic might call this selective leaking. But while the choice of what to release may well be self-serving, the veracity of the data itself is far harder to dispute, exactly because it can be cross-referenced with so many other publicly available sources and so made to speak for itself.

Right now, we’re in the midst of another fast-unfolding example of surveillance apparatus and public data standing in the way of dubious state claims — in the case of the disappearance of Washington Post journalist Jamal Khashoggi, who went into the Saudi consulate in Istanbul on October 2 for a pre-arranged appointment to collect papers for his wedding and never came out.

Saudi authorities first tried to claim Khashoggi left the consulate the same day, though did not provide any evidence to back up their claim. And CCTV clearly showed him going in.

Yesterday they finally admitted he was dead — but are now trying to claim he died quarrelling in a fistfight, attempting to spin another after-the-fact narrative to cover up and blame-shift the targeted slaying of a journalist who had written critically about the Saudi regime.

Since Khashoggi went missing, CCTV and publicly available data have also been pulled and compared to identify a group of Saudi men who flew into Istanbul just prior to his appointment at the consulate; were caught on camera outside it; and left Turkey immediately after he had vanished.

That includes naming a leading Saudi forensics doctor, Dr Salah Muhammed al-Tubaigy, as being among the party, which Turkish government sources also told journalists had been carrying a bone saw in its luggage.

Men in the group have also been linked to Saudi crown prince Mohammed bin Salman, via cross-referencing travel records and social media data.

“In a 2017 video published by the Saudi-owned Al Ekhbariya on YouTube, a man wearing a uniform name tag bearing the same name can be seen standing next to the crown prince. A user with the same name on the Saudi app Menom3ay is listed as a member of the royal guard,” writes the Guardian, joining the dots on another suspected henchman.

A marked element of the Khashoggi case has been the explicit descriptions of his fate leaked to journalists by Turkish government sources, who have said they have recordings of his interrogation, torture and killing inside the building — presumably via bugs either installed in the consulate itself or via intercepts placed on devices held by the individuals inside.

This surveillance material has reportedly been shared with US officials, where it must be shaping the geopolitical response — making it harder for President Trump to do what he really wants to do, and stick like glue to a regional US ally with which he has his own personal financial ties, because the arms of that state have been recorded in the literal act of cutting off the fingers and head of a critical journalist, and then sawing up and disposing of the rest of his body.

Attempts by the Saudis to construct a plausible narrative to explain what happened to Khashoggi when he stepped over its consulate threshold to pick up papers for his forthcoming wedding have failed in the face of all the contrary data.

Meanwhile, the search for a body goes on.

And attempts by the Saudis to shift blame for the heinous act away from the crown prince himself are also being discredited by the weight of data…

And while it remains to be seen what sanctions, if any, the Saudis will face from Trump’s conflicted administration, the crown prince is already being hit where it hurts by the global business community withdrawing in horror from the prospect of being tainted by bloody association.

The idea that a company as reputation-sensitive as Apple would be just fine investing billions more alongside the Saudi regime, in SoftBank’s massive Vision Fund vehicle, seems unlikely, to say the least.

Thanks to technology’s surveillance creep the world has been given a close-up view of how horrifyingly brutal the Saudi regime can be — and through the lens of an individual it can empathize with and understand.

Safe to say, supporting second acts for regimes that cut off fingers and sever heads isn’t something any CEO would want to become famous for.

The power of technology to erode privacy is clearer than ever. Down to the very teeth of the bone saw. But what’s also increasingly clear is that powerful and at times terrible capability can be turned around to debase power itself — when authorities themselves become abusers.

So the flip-side of the surveillance state can be seen in the public airing of the bloody colors of abusive regimes.

Turns out, microscopic details can make all the difference to geopolitics.

RIP Jamal Khashoggi


Source: The Tech Crunch

Read More

T-Mobile quietly reveals uptick in government data demands

Posted by on Aug 27, 2018 in Government, national security, Security, supreme court, surveillance, t-mobile, transparency report | 0 comments

T-Mobile has revealed an uptick in the number of demands for data it receives from the government.

The cellular giant quietly posted its 2017 transparency report on August 14, revealing a 12 percent increase in the number of overall data demands it responded to compared to the previous year.

The report said the company responded to 219,377 subpoenas, an 11 percent rise on the previous year. These demands were issued by federal agencies and do not require any judicial oversight. The company also responded to 55,372 court orders, a 13 percent rise, and 27,203 warrants, a rise of 19 percent.

But the number of wiretap orders — which allow police to listen in to calls in real time — went down by half on the previous year.

A spokesperson for T-Mobile told TechCrunch that the figures reflect a “typical increase of legal demands across the board” and that the increases are “consistent with past years.”

Although the results reveal more requests for customer data, the transparency report did not say how many customers were affected.

T-Mobile has 77 million users as of its second-quarter earnings.

Several tech companies have published the number of government requests for customer data they receive ever since Google’s debut report in 2010. But it was only after the Edward Snowden disclosures in 2013, which revealed mass surveillance by the National Security Agency, that tech companies and telcos began regularly publishing transparency reports, seen as an effort to counter damaging claims that companies helped the government spy.

T-Mobile became the last major cell carrier to issue a transparency report two years later in 2015.

The company also said that it responded to 64,266 requests by law enforcement for customers’ historical cell site data. That data became the focal point of the Carpenter v. United States case earlier this year, in which the Supreme Court ruled that law enforcement must obtain a warrant for historical cell and location data. That figure is expected to fall during the 2018 reporting year, now that obtaining a court-signed warrant sets a higher bar.

T-Mobile also said it received 46,395 requests to track customers’ real-time location, and 4,855 warrants and orders for tower dumps, which police can use to obtain information on all the nearby devices connected to a cell tower during a particular period of time.

But the number of national security requests received declined during 2017.

The number of national security letters used by federal agents to obtain call records in secret and the number of orders granted by the secret Foreign Intelligence Surveillance Court were each below 1,000 requests for the full year.

Tech companies and telcos are highly restricted in how they can report the number of classified orders demanding customer data in secret, and can only report in ranges of requests they received.

Since the USA Freedom Act was signed into law in 2015, the Justice Department has allowed companies to report in narrower ranges.
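
For illustration only, here is a minimal sketch of what reporting “in ranges” means in practice; the band width below is a hypothetical parameter, not the specific band sizes the law prescribes for each report type.

    def reporting_band(exact_count: int, band_width: int = 500) -> str:
        """Map an exact request count to a disclosure band such as '0-499' or '500-999'.

        band_width is a hypothetical parameter used for illustration; the
        permitted band sizes are set by law and differ by report type.
        """
        lower = (exact_count // band_width) * band_width
        upper = lower + band_width - 1
        return f"{lower}-{upper}"

    # A carrier that received, say, 730 classified orders could only disclose a band.
    print(reporting_band(730))       # -> 500-999
    print(reporting_band(120, 250))  # -> 0-249

The narrower the permitted band, the more informative the disclosure, which is why the post-2015 shift to narrower ranges matters.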


Source: The Tech Crunch

Read More

RideAlong is helping police officers de-escalate 911 calls with data designed for the field

Posted by on Aug 14, 2018 in america, Government, law enforcement, policing, policing tech, public health, San Francisco, Seattle, surveillance, TC, Y Combinator | 0 comments

RideAlong keeps people in mind, and that’s a good thing. The company, founded by Meredith Hitchcock (COO) and Katherine Nammacher (CEO), aims to make streets safer, not with expansive surveillance systems or high-tech weaponry but with simple software focused on the people being policed. That distinction sounds small, but it’s surprisingly revelatory. Tech so often forgets the people that it’s ostensibly trying to serve, but with RideAlong they’re front and center.

“The thing about law enforcement is they are interacting with individuals who have been failed by the rest of society and social support networks,” Nammacher told TechCrunch in an interview. “We want to help create a dialogue toward a more perfect future for people who are having some really rough things happen to them. Police officers also want that future.”

RideAlong is specifically focused on serving populations that have frequent interactions with law enforcement. Those individuals are often affected by complex forces that require special care — particularly chemical dependence, mental illness and homelessness.

“I think it is universally understood if someone has a severe mental illness… putting them through the criminal justice system and housing them in a jail is not the right thing to do,” Nammacher said. For RideAlong, the question is how to help those individuals obtain long-term support from a system that isn’t really designed to adequately serve them.

Made for field work, RideAlong is a mobile responsive web app that presents relevant information on individuals who frequently use emergency services. It collects data that might otherwise only live in an officer’s personal notebook or a police report, presenting it on a call so that officers can use it to determine if an individual is in crisis and if they are, the best way to de-escalate their situation and provide support. With a simple interface and a no-frills design, RideAlong works everywhere from a precinct laptop to a smartphone in the field to a patrol car’s dash computer.

Nammacher explains that any police officer could easily think of the five people they interact with most often, recalling key details about them like their dog’s name and whether they are close to a known family member. That information is very valuable for responding to a crisis but it often isn’t accessible when it needs to be.

“They’ve come up with some really smart manual workarounds for how to deal with that,” Nammacher says, but it isn’t always enough. That real-time information gap is where RideAlong comes in.

How RideAlong works

RideAlong is designed so that police officers and other first responders can search its database by name and location, but also by gender, height, weight, ethnicity and age. When a search hits a result in the system, RideAlong can help officers detect subtle shifts from a known baseline behavior. The hope is that even very basic contextual information can provide clues that make a big difference in outcomes.
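
RideAlong has not published its schema or matching logic, so purely as a hedged sketch of the kind of attribute search described above (every field name and record below is hypothetical), a simple filter over person records might look like this:

    from dataclasses import dataclass

    @dataclass
    class PersonRecord:
        # Hypothetical fields mirroring the attributes described in the article.
        name: str
        last_seen_location: str
        gender: str
        height_cm: int
        age: int
        baseline_notes: str  # e.g. "usually calm; talking about her dog helps"

    def search(records, **criteria):
        """Return records matching every supplied attribute exactly.

        A production system would use fuzzy matching and ranges (height within
        a few centimetres, approximate age); exact matching keeps the sketch short.
        """
        return [r for r in records
                if all(getattr(r, field) == value for field, value in criteria.items())]

    records = [
        PersonRecord("Suzanne", "Pioneer Square", "F", 165, 42,
                     "usually calm; asking about her dog helps de-escalate"),
    ]

    for hit in search(records, gender="F", age=42):
        print(hit.name, "-", hit.baseline_notes)

Even in a toy version like this, the de-escalation value sits in the notes surfaced alongside the match rather than in the matching itself, which is the behavior the Seattle officers describe.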

So far, it seems to be working. RideAlong has been live in Seattle for a year, with the Seattle Police Department’s 1,300 sworn officers using the software every day. Over the course of six months with RideAlong, Seattle and King County saw a 35% reduction in 911 calls. That decrease, interpreted as a sign of more efficient policing, translated into $407,000 in deferred costs for the city.

“It really assists with decision making, especially when it comes to crisis calls,” Seattle Police Sergeant Daniel Nelson told TechCrunch. Officers have a lot of discretion to do what they think is best based on the information available. “There is so much gray space.”

RideAlong has also partnered with the San Francisco Department of Public Health, where a street medicine team is putting it to use in a pilot. West of Seattle, the Kitsap County Sheriff’s Office is looking at RideAlong for its team of 300 officers.

What this looks like in practice: An officer responds to a call involving a person they know named Suzanne. They might remember that asking her about her dog normally calms her down, but that today it makes her upset. Rather than assuming that her agitated behavior is coming out of the blue, the responding officer could address concerns around Suzanne’s dog and help de-escalate the situation.

In another example, an officer responds to someone on the street who they perceive to be yelling and agitated. Checking contextual information in RideAlong could clarify that an individual just speaks loudly because they are hard of hearing, not in crisis. If someone is actually agitated and drawing helps them calm down, RideAlong will note that.

“RideAlong visualizes that data, so when somebody is using the app they can see, ‘okay this person has 50 contacts, they’ve been depressed, sad, crying,’” Nelson said. “Cops are really good at seeing behavior and describing behavior so that’s what we’re asking of them.”

The idea is that making personalized data like this easy to see can reduce the use of force in the field, calm someone down and open the door to connecting them to social services and any existing support network.

“I’ve known all along that we’ve got incredible data, but it’s not getting out to the people on the streets,” said Maria X. Martinez, Director of Whole Person Care at the San Francisco Department of Public Health. RideAlong worked directly with her department’s street medicine team on a pilot program that gave clinicians access to key data while providing medical care to the city’s homeless population.

Traditionally, street medicine workers do their work in the field and then return to look up the records for the people they interacted with. Now those processes are combined: 15 different sets of relevant data get pulled together and presented in the field, where workers can add to and annotate them. “It’s one thing to tell people to come back and enter their data… you sort of hope that that does happen,” Martinez said. With RideAlong, “You’ve already done both things: documented and given them the info.”

Forming RideAlong

The small team at RideAlong began when the co-founders met during a Code for America fellowship in 2016 and built the app that year under the banner of a data-driven justice program during the Obama administration. Interest was immediate. The next year, Nammacher and Hitchcock spun the project out into its own company, became part of Y Combinator’s summer batch of startups and by July had launched a pilot program with the entire Seattle police department.

Neither co-founder planned on starting a company, but they were inspired both by what they describe as a “real-time information gap” between people experiencing mental health crises and the people dispatched to help them, and by the level of interest from “agencies across the country, big and small” that wanted to buy their product.

“There’s been more of a push recently for quantitative data to be a more central force for decision making,” Nammacher said. The agencies RideAlong has worked with so far like how user-friendly the software is and how it surfaces the data they already collect to make it more useful.

“At the end of the day, our users are both the city staff member and the person that they’re serving. We see them as equally valid and important.”


Source: The Tech Crunch

Read More

What we know about Maryland’s controversial facial recognition database

Posted by on Jun 29, 2018 in crime, facial recognition, Government, Privacy, surveillance, TC | 0 comments

When police had difficulty identifying the man whom they believed opened fire on a newsroom in Maryland, killing five people, they turned to one of the most controversial yet potent tools in the state’s law enforcement arsenal.

As The New York Times reports, Anne Arundel County Police Chief Timothy Altomare’s department failed to ID its suspect through fingerprinting. The department then sent a picture of the suspect to the Maryland Coordination and Analysis Center, which combed through one of the nation’s largest databases of mug shots and driver’s license photos in search of a match.

That database is the source of some debate. Maryland has some of the most aggressive facial recognition policies in the nation, according to a national report from Georgetown University’s Center on Privacy & Technology, and that practice is powered by one central system: a pool of face data known as the Maryland Image Repository System (MIRS).

For facial recognition searches, Maryland police have access to three million state mug shots, seven million state driver’s license photos and an additional 24.9 million mug shots from a national FBI database. The state’s practice of face recognition searches began in 2011, expanding in 2013 to incorporate the Maryland Motor Vehicle Administration’s existing driver’s license database. The Maryland Department of Public Safety and Correctional Services (DPSCS) describes MIRS as “a digitized mug shot book used by law enforcement agencies throughout Maryland in the furtherance of their law enforcement investigation duties.”

According to the Georgetown report, “It’s unclear if the [Maryland Department of Public Safety and Correctional Services] ‘scrubs’ its mug shot database to eliminate people who were never charged, had charges dropped or dismissed, or who were found innocent.”

In a letter to Maryland’s House Appropriations and Senate Budget and Taxation Committees in late 2017, DPSCS Secretary Stephen T. Moyer notes that the software “has drawn criticism over privacy concerns.” In that report, the state notes that images uploaded to MIRS are not stored in the database and that “the user’s search results are saved under their session and are not available to any other user.” DPSCS provides these details about the software:

MIRS is an off-the-shelf software program developed by Dataworks Plus. Images are uploaded into the system from MVA, DPSCS inmate case records, and mugshot photos sent into the DPSCS Criminal Justice System-Central Repository (CJIS-CR) from law enforcement agencies throughout the State at the time of an offender’s arrest and booking. Members of law enforcement are able to upload an image to MIRS and that image is compared to the images within the system to determine the highest probability that the uploaded image may relate to an MVA and/or DPSCS image within MIRS.
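
DataWorks Plus has not published how MIRS scores candidates, so what follows is only a rough sketch, under the assumption that “highest probability” matching works the way most modern face recognition pipelines do: face images are reduced to embedding vectors, and enrolled images are ranked by similarity to the uploaded probe image.

    import numpy as np

    def top_matches(probe_embedding, gallery_embeddings, gallery_ids, k=5):
        """Rank enrolled images by cosine similarity to an uploaded probe image.

        In a real system the embeddings would come from a trained face
        recognition model; random vectors are used below, so this is
        illustrative only.
        """
        probe = probe_embedding / np.linalg.norm(probe_embedding)
        gallery = gallery_embeddings / np.linalg.norm(gallery_embeddings, axis=1, keepdims=True)
        scores = gallery @ probe              # cosine similarity per enrolled image
        order = np.argsort(scores)[::-1][:k]  # highest-scoring candidates first
        return [(gallery_ids[i], float(scores[i])) for i in order]

    # Toy example: 1,000 enrolled "images" as 128-dimensional embeddings.
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(1000, 128))
    ids = [f"record-{i}" for i in range(1000)]
    probe = gallery[42] + 0.05 * rng.normal(size=128)  # a noisy copy of record 42

    print(top_matches(probe, gallery, ids, k=3))       # record-42 should rank first

Note that a ranked search like this always returns its best-scoring candidates whether or not the person in the probe image is actually enrolled, which is one reason the Georgetown report’s questions about who ends up in the database matter.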

In the 2017 fiscal year, DPSCS paid DataWorks Plus $185,124.24 to maintain the database. The report declined to answer questions about how many users are authorized to access the MIRS system (estimates in The Baltimore Sun put it at between 6,000 and 7,000 individuals) and how many user logins had occurred since 2015, stating that it did not track or collect this information. On a question of what steps the department takes to mitigate privacy risks, DPSCS stated only that “the steps taken to protect citizen’s privacy are inherent in the photos that are uploaded into the system and the way that the system is accessed.”

In 2016, Maryland’s face recognition database came under new scrutiny after the ACLU accused the state of using MIRS without a warrant to identify protesters in Baltimore following the death of Freddie Gray.

Last year, Maryland House Bill 1065 proposed a task force to examine surveillance techniques used by law enforcement in the state. That bill made it out of the House but did not progress past the Senate Judicial Proceedings Committee. Another bill, known as the Face Recognition Act (HB 1148), would mandate auditing in the state to “ensure that face recognition is used only for legitimate law enforcement purposes” and would prohibit the use of Maryland’s face recognition system without a court order. That bill did not make it out of the House Judiciary Committee, though the ACLU intends to revisit it in 2018.


Source: The Tech Crunch

Read More