
The blog of DataDiggers


Mozilla’s free password manager, Firefox Lockbox, launches on Android

Posted on Mar 26, 2019 in android, android apps, Apps, firefox, Mozilla, password manager, Privacy, Security, web browser, Web browsers | 0 comments

Mozilla’s free password manager designed for users of the Firefox web browser officially arrives on Android today. The standalone app, called Firefox Lockbox, offers a simple, if somewhat basic, way for users to access the logins already stored in their Firefox browser from their mobile device.

The app is nowhere near as developed as password managers like 1Password, Dashlane and LastPass: it lacks common features like the ability to add, edit or delete passwords; suggest complex passwords; or alert you to potentially compromised passwords resulting from data breaches, among other things.

However, the app is free — and if you’re already using Firefox’s browser, it’s at the very least a more secure alternative to writing down your passwords in an unprotected notepad app, for example. And you can opt to enable Lockbox as an Autofill service on Android.

But the app is really just a companion to Firefox. The passwords in Lockbox securely sync to the app from the Firefox browser — they aren’t entered by hand. For security, the app can be locked with facial recognition or a fingerprint (depending on device support). The passwords are also encrypted in a way that doesn’t allow Mozilla to read your data, it explains in a FAQ.

Firefox Lockbox is one of several projects Mozilla developed through its now-shuttered Test Pilot program. Over a few years’ time, the program allowed the organization to trial more experimental features — some of which made their way to official products, like the recently launched file-sharing app, Firefox Send.

Others in the program — including Firefox Color, Side View, Firefox Notes, Price Tracker and Email Tabs — remain available, but are no longer actively developed beyond occasional maintenance releases. Mozilla’s current focus is on its suite of “privacy-first” solutions, not its other handy utilities.

According to Mozilla, Lockbox was downloaded more than 50,000 times on iOS ahead of today’s Android launch.

The Android version is a free download on Google Play.


Source: The Tech Crunch


Mozilla launches its free, encrypted file-sharing service, Firefox Send

Posted on Mar 12, 2019 in encryption, firefox, Mozilla, online privacy, TC, web browser | 0 comments

Firefox Send, Mozilla’s free, encrypted file-transfer service, is officially launching to the public today following its debut as a “Test Pilot” experiment back in August 2017. The service allows web users to share files up to 2.5 GB in size through the browser, while protecting them with end-to-end encryption and a link that automatically expires to keep the shared files private.

When Mozilla first began testing the web-based Send tool, file shares were limited to 1 GB. Today, that remains the limitation until users sign up for a free Firefox account. They can then opt to share files up to 2.5 GB.

The system offers an alternative to email, where larger file attachments are more of an issue, as well as cloud storage sites, like Google Drive and Dropbox, which can be time-consuming when all you need to do is share a single file one time — not store the file, edit it or collaborate with others.

To use the service, the sender visits the Send website, uploads the files and sets an expiration period — a design choice seemingly inspired by Snapchat and its concepts around ephemerality. You also can opt to have the files protected with a password before sending.

Firefox Send then offers a link you can give to the recipient however you see fit, which they simply click to start the download. They will not need a Firefox account of their own to access the files, Mozilla notes.
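For readers curious how this kind of end-to-end encrypted sharing works in general terms, here is a minimal, hypothetical Python sketch (using the cryptography library) of the underlying pattern: encrypt the file locally and carry the key only in the link fragment, which browsers never send to the server. It illustrates the concept, not Mozilla’s actual Firefox Send implementation, and the URL and filename are placeholders.

import os
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_sharing(path):
    # Generate a fresh key and nonce for this one file.
    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    with open(path, "rb") as f:
        plaintext = f.read()
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Only the ciphertext would be uploaded; the key rides in the URL
    # fragment, which the browser never transmits to the hosting server.
    fragment = base64.urlsafe_b64encode(key).decode()
    share_link = "https://send.example.invalid/file/abc123#" + fragment
    return nonce + ciphertext, share_link

blob, link = encrypt_for_sharing("statement.pdf")
# `blob` is what gets uploaded; `link` is handed to the recipient.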

The organization suggests that the new tool could be used for moving files around the web that you might otherwise worry about sharing — like financial information, for example.

However, security experts would still caution against using any online service to share highly sensitive files, as there’s always the potential that something could go wrong with regard to unauthorized access.

That said, Mozilla is one of the more trustworthy organizations when it comes to things like this. Send, it says, is “Private By Design,” which means all files are protected — it designed Firefox accounts so users never send Firefox their passphrase, for example. Mozilla also says it stands by its mission to “handle your data privately and securely,” as it writes in an announcement today. That speaks more to the organization’s ethos — that it believes privacy is a fundamental right.

Still, security experts need to test systems first-hand before signing off on their trustworthiness. As Send is only this morning debuting in its official, public iteration, that hasn’t yet been done.

The new tool could help Firefox attract a new audience to its web tools and services. Firefox was once a top web browser and household name, but its market share declined over the years as the built-in options from larger tech companies took hold — like IE, Safari and Chrome. However, with people’s increasing suspicion of big tech, as well as far-too-frequent data breaches, and the overall decline in online privacy, it’s the best time for Firefox to try to stage a comeback. Whether it will actually be able to deliver is another matter.

Firefox Send is launching today on the web at send.firefox.com and will be available as an Android app in beta later this week.


Source: The Tech Crunch


Massive mortgage and loan data leak gets worse as original documents also exposed

Posted on Jan 24, 2019 in Amazon-S3, cloud storage, computer security, data breach, data security, database, email, Finance, Government, New York, ocr, Prevention, Privacy, Security, texas, United States, web browser | 1 comment

Remember that massive data leak of mortgage and loan data we reported on Wednesday?

In case you missed it, millions of documents were leaking from an Elasticsearch server that had been left exposed without a password. The data contained highly sensitive financial information on tens of thousands of individuals who took out loans or mortgages over the past decade with U.S. financial institutions. The documents had been converted from their original paper form into a computer-readable format using optical character recognition (OCR) and stored in the database, but they weren’t easy to read. That said, anyone who knew where to find the server could discern names, addresses, birth dates, Social Security numbers and other private financial data.
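To make the misconfiguration concrete: an Elasticsearch server left without authentication will answer standard REST queries from any web browser or script. Below is a rough, hypothetical Python sketch of the kind of check a researcher might run — the address is a placeholder in a reserved test range, not the server from this report.

import requests

host = "http://203.0.113.10:9200"  # placeholder address, not the real server

# _cat/indices is a standard Elasticsearch endpoint listing every index.
resp = requests.get(host + "/_cat/indices?v", timeout=5)
if resp.status_code == 200:
    # Answering without credentials means the index metadata — and, in
    # practice, the documents themselves — are readable by anyone.
    print("Cluster is open; indices visible:")
    print(resp.text)
else:
    print("Server answered HTTP %d; access appears restricted." % resp.status_code)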

Independent security researcher Bob Diachenko and TechCrunch traced the source of the leaking database to a Texas-based data and analytics company, Ascension. When reached, the company said that one of its vendors, OpticsML, a New York-based document management startup, had mishandled the data and was to blame for the data leak.

It turns out that data was exposed again — but this time, it was the original documents.

Diachenko found the second trove of data in a separate exposed Amazon S3 storage server, which also was not protected with a password. Anyone who went to an easy-to-guess web address in their web browser could have accessed the storage server and seen — and downloaded — the files stored inside.

In a note to TechCrunch, Diachenko said he was “very surprised” to find the server in the first place, let alone open and accessible. Because Amazon storage servers are private by default and aren’t accessible to the web, someone must have made a conscious decision to set the bucket’s permissions to public.
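For context, here is a brief, hypothetical sketch (using Python’s boto3 library) of how a bucket owner could audit for exactly this mistake — an ACL grant to the global “AllUsers” group is what makes a bucket world-readable. The bucket name is invented for illustration.

import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket="example-loan-documents")  # hypothetical name

public_grants = [g for g in acl["Grants"]
                 if g.get("Grantee", {}).get("URI") == ALL_USERS]
if public_grants:
    perms = ", ".join(g["Permission"] for g in public_grants)
    print("Bucket is readable by everyone; grants: " + perms)
else:
    print("No AllUsers grants found in the bucket ACL.")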

The bucket contained 21 files holding 23,000 pages of PDF documents stitched together — about 1.3 gigabytes in size. Diachenko said that portions of the data in the Elasticsearch database exposed on Wednesday matched data found in the Amazon S3 bucket, confirming that some or all of the data is the same as what was previously discovered. As in Wednesday’s report, the server contained documents from banks and financial institutions across the U.S., including loan and mortgage agreements. We also found documents from the U.S. Department of Housing and Urban Development, as well as W-2 tax forms, loan repayment schedules and other sensitive financial information.

Two of the files — redacted — found on the exposed storage server. (Image: TechCrunch)

Many of the files also contained names, addresses, phone numbers, Social Security numbers and more.

When we tried to reach OpticsML on Wednesday, its website had been pulled offline and the listed phone number was disconnected. After scouring through an old cached version of the site, we found an email address.

TechCrunch emailed chief executive Sean Lanning, and the bucket was secured within the hour.

Lanning acknowledged our email but did not comment. Instead, OpticsML chief technology officer John Brozena confirmed the breach in a separate email, but declined to answer several questions about the exposed data — including how long the bucket was open and why it was set to public.

“We are working with the appropriate authorities and a forensic team to analyze the full extent of the situation regarding the exposed Elasticsearch server,” said Brozena. “As part of this investigation we learned that 21 documents used for testing were made identifiable by the previously discussed Elasticsearch leak. These documents were taken offline promptly.”

He added that OpticsML is “working to notify all affected parties” when asked about informing customers and state regulators, as per state data breach notification laws.

But Diachenko said there was no telling how many times the bucket might have been accessed before it was discovered.

“I would assume that after such publicity like these guys had, first thing you would do is to check if your cloud storage is down or, at least, password-protected,” he said.


Source: The Tech Crunch


How a small French privacy ruling could remake adtech for good

Posted on Nov 20, 2018 in Adtech, Advertising Tech, data controller, data protection, digital media, DuckDuckGo, Europe, European Union, Facebook, General Data Protection Regulation, Google, iab europe, Ireland, Lawsuit, online ads, Online Advertising, Open Rights Group, Privacy, programmatic advertising, Real-time bidding, rtb, Security, Social, TC, United Kingdom, web browser | 0 comments

A ruling in late October against a little-known French adtech firm that popped up on the national data watchdog’s website earlier this month is causing ripples of excitement to run through privacy watchers in Europe who believe it signals the beginning of the end for creepy online ads.

The excitement is palpable.

Impressively so, given the dry CNIL decision against mobile “demand side platform” Vectaury was only published in the regulator’s native dense French legalese.

Digital advertising trade press AdExchanger picked up on the decision yesterday.

Here’s the killer paragraph from CNIL’s ruling — translated into “rough English” by my TC colleague Romain Dillet:

The requirement based on the article 7 above-mentioned isn’t fulfilled with a contractual clause that guarantees validly collected initial consent. The company VECTAURY should be able to show, for all data that it is processing, the validity of the expressed consent.

In plainer English, this is being interpreted by data experts as the regulator stating that consent to processing personal data cannot be gained through a framework arrangement which bundles a number of uses behind a single “I agree” button that, when clicked, passes consent to partners via a contractual relationship.

CNIL’s decision suggests that bundling consent to partner processing in a contract is not, in and of itself, valid consent under the European Union’s General Data Protection Regulation (GDPR) framework.

Consent under this regime must be specific, informed and freely given. It says as much in the text of GDPR.

But now, on top of that, the CNIL’s ruling suggests a data controller has to be able to demonstrate the validity of the consent — so cannot simply tuck consent inside a contractual “carpet-bag” that gets passed around to everyone else in their chain as soon as the user clicks “I agree.”

This is important, because many widely used digital advertising consent frameworks rolled out to websites in Europe this year — in claimed compliance with GDPR — are using a contractual route to obtain consent, and bundling partner processing behind often hideously labyrinthine consent flows.

The experience for web users in the EU right now is not great. But it could be leading to a much better internet down the road.

Where’s the consent for partner processing?

Even on a surface level the current crop of confusing consent mazes look problematic.

But the CNIL ruling suggests there are deeper and more structural problems lurking and embedded within. And as regulators dig in and start to unpick adtech contradictions it could force a change of mindset across the entire ecosystem.

As ever, when talking about consent and online ads the overarching point to remember is that no consumer given a genuine full disclosure about what’s being done with their personal data in the name of behavioral advertising would freely consent to personal details being hawked and traded across the web just so a bunch of third parties can bag a profit share.

This is why, despite GDPR being in force (since May 25), there are still so many tortuously confusing “consent flows” in play.

The longstanding online T&Cs trick of obfuscating and socially engineering consent remains an unfortunately standard playbook. But, less than six months into GDPR we’re still very much in a “phoney war” phase. More regulatory rulings are needed to lay down the rules by actually enforcing the law.

And CNIL’s recent activity suggests more to come.

In the Vectaury case, the mobile ad firm used a template framework for its consent flow that had been created by industry trade association and standards body, IAB Europe.

It did make some of its own choices, using its own wording on an initial consent screen and pre-ticking the purposes (another big GDPR no-no). But the bundling of data purposes behind a single opt in/out button is the core IAB Europe design. So CNIL’s ruling suggests there could be trouble ahead for other users of the template.

IAB Europe’s CEO, Townsend Feehan, told us it’s working on a statement reaction to the CNIL decision, but suggested Vectaury fell foul of the regulator because it may not have implemented the “Transparency & Consent Framework-compliant” consent management platform (CMP) framework — as it’s tortuously known — correctly.

So either “the ‘CMP’ that they implemented did not align to our Policies, or choices they could have made in the implementation of their CMP that would have facilitated compliance with the GDPR were not made,” she suggested to us via email.

Though that sidesteps the contractual crux point that’s really exciting privacy advocates — and making them point to the CNIL as having slammed the first of many unbolted doors.

The French watchdog has made a handful of other decisions in recent months, also involving geolocation-harvesting adtech firms, and also for processing data without consent.

So regulatory activity on the GDPR+adtech front has been ticking up.

Its decision to publish these rulings suggests it has wider concerns about the scale and privacy risks of current programmatic ad practices in the mobile space than can be attached to any single player.

So the suggestion is that just publishing the rulings looks intended to put the industry on notice…

Meanwhile, adtech giant Google has also made itself unpopular with publisher “partners” over its approach to GDPR by forcing them to collect consent on its behalf. And in May a group of European and international publishers complained that Google was imposing unfair terms on them.

The CNIL decision could sharpen that complaint too — raising questions over whether audits of publishers that Google said it would carry out will be enough for the arrangement to pass regulatory muster.

For a demand-side platform like Vectaury, which was acting on behalf of more than 32,000 partner mobile apps with user eyeballs to trade for ad cash, achieving GDPR compliance would mean either asking users for genuine consent and/or having a very large number of contracts on which it’s doing actual due diligence.

Yet Google is orders of magnitude more massive, of course.

The Vectaury file gives us a fascinating little glimpse into adtech “business as usual.” Business which also wasn’t, in the regulator’s view, legal.

The firm was harvesting a bunch of personal data (including people’s location and device IDs) on its partners’ mobile users via an SDK embedded in their apps, and receiving bids for these users’ eyeballs via another standard piece of the programmatic advertising pipe — ad exchanges and supply side platforms — which also get passed personal data so they can broadcast it widely via the online ad world’s real-time bidding (RTB) system. That’s to solicit potential advertisers’ bids for the attention of the individual app user… The wider the personal data gets spread, the more potential ad bids.
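To make that flow tangible, here is a rough, simplified illustration of the sort of bid request the RTB pipes broadcast — loosely modeled on the industry’s OpenRTB format, with every value invented for the example. The point is that a persistent device identifier and a precise location travel with each request to every company listening.

bid_request = {
    "id": "b7c1e2d4-example",                      # placeholder auction ID
    "app": {"bundle": "com.example.flashlight"},   # hypothetical partner app
    "device": {
        "ifa": "3f2a9c7e-0000-0000-0000-aaaaaaaaaaaa",  # resettable ad ID
        "os": "Android",
        "geo": {"lat": 48.8738, "lon": 2.2950, "type": 2},  # GPS-derived fix
    },
    "user": {"id": "example-user-7421"},           # exchange-assigned user ID
    "imp": [{"banner": {"w": 320, "h": 50}, "bidfloor": 0.05}],
}
# Every exchange, DSP and data partner that receives this can log and keep
# these fields — which is how a firm can end up holding data on tens of
# millions of people it has no direct relationship with.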

That scale is how programmatic works. It also looks horrible from a GDPR “privacy by design and default” standpoint.

The sprawling process of programmatic explains the very long list of “partners” nested non-transparently behind the average publisher’s online consent flow. The industry, as it is shaped now, literally trades on personal data.

So if the consent rug it’s been squatting on for years suddenly gets ripped out from underneath it, there would need to be a radical reshaping of ad-targeting practices to avoid trampling on EU citizens’ fundamental rights.

GDPR’s really big change was supersized fines. So ignoring the law would get very expensive.

Oh hai real-time bidding!

In Vectaury’s case, CNIL discovered the company was holding the personal data of a staggering 67.6 million people when it conducted an on-site inspection of the company in April 2018.

That already sounds like A LOT of data for a small mobile adtech player. Yet it might actually have been a tiny fraction of the personal data the company was routinely handling — given that Vectaury’s own website claims 70 percent of collected data is not stored.

In the decision there was no fine, but CNIL ordered the firm to delete all data it had not already deleted (having judged collection illegal given consent was not valid); and to stop processing data without consent.

But given the personal-data-based hinge of current-gen programmatic adtech, that essentially looks like an order to go out of business. (Or at least out of that business.)

And now we come to another interesting GDPR adtech complaint that’s not yet been ruled on by the two DPAs in question (Ireland and the U.K.) — but which looks even more compelling in light of the CNIL Vectaury decision because it picks at the adtech scab even more daringly.

Filed last month with the Irish Data Protection Commission and the U.K.’s ICO, this adtech complaint — the work of three individuals, Johnny Ryan of private web browser Brave; Jim Killock, exec director of digital and civil rights group, the Open Rights Group; and University College London data protection researcher, Michael Veale — targets the RTB system itself.

Here’s how Ryan, Killock and Veale summarized the complaint when they announced it last month:

Every time a person visits a website and is shown a “behavioural” ad on a website, intimate personal data that describes each visitor, and what they are watching online, is broadcast to tens or hundreds of companies. Advertising technology companies broadcast these data widely in order to solicit potential advertisers’ bids for the attention of the specific individual visiting the website.

A data breach occurs because this broadcast, known as a “bid request” in the online industry, fails to protect these intimate data against unauthorized access. Under the GDPR this is unlawful.

The GDPR, Article 5, paragraph 1, point f, requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss.” If you can not protect data in this way, then the GDPR says you can not process the data.

Ryan tells TechCrunch that the crux of the complaint is not related to the legal basis of the data sharing but rather focuses on the processing itself — arguing “that it itself is not adequately secure… that there aren’t adequate controls.”

Though he says there’s a consent element too, and so sees the CNIL ruling bolstering the RTB complaint. (On that keep in mind that CNIL judged Vectaury should not have been holding the RTB data of 67.6M people because it did not have valid consent.)

“We do pick up on the issue of consent in the complaint. And this particular CNIL decision has a bearing on both of those issues,” he argues. “It demonstrates in a concrete example that involved investigators going into physical premises and checking the machines — it demonstrates that even one small company was receiving tens of millions of people’s personal data in this illegal way.

“So the breach is very real. And it demonstrates that it’s not unreasonable to suggest that the consent is meaningless in any case.”

Reaching for a handy visual explainer, he continues: “If I leave a briefcase full of personal data in the middle of Charing Cross station at 11am and it’s really busy, that’s a breach. That would have been a breach back in the 1970s. If my business model is to drive up to Charing Cross station with a dump-truck and dump briefcases onto the street at 11am in the full knowledge that my business partners will all scramble around and try and grab them — and then to turn up at 11.01am and do the same thing. And then 11.02am. And every microsecond in between. That’s still a fucking data breach!

“It doesn’t matter if you think you’ve consent or anything else. You have to [comply with GDPR Article 5, paragraph 1, point f] in order to even be able to ask for a legal basis. There are plenty of other problems but that’s the biggest one that we highlighted. That’s our reason for saying this is a breach.”

“Now what CNIL has said is this company, Vectaury, was processing personal data that it did not lawfully have — and it got them through RTB,” he adds, spelling the point out. “So back to the GDPR — GDPR is saying you can’t process data in a way that doesn’t ensure protection against unauthorized or unlawful processing.”

In other words, RTB as a funnel for processing personal data looks to be on inherently shaky ground because it’s inherently putting all this personal data out there and at risk…

What’s bad for data brokers…

In another loop back, Ryan says the regulators have been in touch since their RTB complaint was filed to invite them to submit more information.

He says the CNIL Vectaury decision will be incorporated into further submissions, predicting: “This is going to be bounced around multiple regulators.”

The trio is keen to generate extra bounce by working with NGOs to enlist other individuals to file similar complaints in other EU Member States — to make the action a pan-European push, just like programmatic advertising itself.

“We now have the opportunity to connect our complaint with the excellent work that Privacy International has done, showing where these data end up, and with the excellent work that CNIL has done showing exactly how this actually applies. And this decision from CNIL takes, essentially my report that went with our complaint and shows exactly how that applies in the real world,” he continues.

“I was writing in the abstract — CNIL has now made a decision that is very much not in the abstract, it’s in the real world affecting millions of people… This will be a European-wide complaint.”

But what does programmatic advertising that doesn’t entail trading on people’s grubbily obtained personal data actually look like? If there were no personal data in bid requests, Ryan believes quite a few things would happen — such as, for example, the demise of clickbait.

“There would be no way to take your TechCrunch audience and buy it cheaper on some shitty website. There would be no more of that arbitrage stuff. Clickbait would die! All that nasty stuff would go away,” he suggests.

(And, well, full disclosure: We are TechCrunch — so we can confirm that does sound really great to us!)

He also reckons ad values would go up. Which would also be good news for publishers. (“Because the only place you could buy the TechCrunch audience would be on TechCrunch — that’s a really big deal!”)

He even suggests ad fraud might shrink because the incentives would shift. Or at least they could so long as the “worthy” publishers that are able to survive in the new ad world order don’t end up being complicit with bot fraud anyway.

As it stands, publishers are being screwed between the twin plates of the dominant adtech platforms (Google and Facebook), where they are having to give up a majority of their ad revenue — leaving the media industry with a shrinking slice of ad revenues (that can be as lean as ~30 percent).

That then has a knock on impact on funding newsrooms and quality journalism. And, well, on the wider web too — given all the weird incentives that operate in today’s big tech social media platform-dominated internet.

While a privacy-sucking programmatic monster is something only shadowy background data brokers that lack any meaningful relationships with the people whose data they’re feeding the beast could truly love.

And, well, Google and Facebook.

Ryan’s view is that the reason an adtech duopoly exists boils down to the “audience leakage” being enabled by RTB. Leakage which, in his view, also isn’t compliant with EU privacy laws.

He reckons the fix for this problem is equally simple: Keep doing RTB but without any personal data.

A real-time ad bidding system that’s been stripped of personal data does not mean no targeted ads. It could still support ad targeting based on real-time factors such as an approximate location (say to a city region) and/or generic and aggregated data.

Crucially, it would not use unique identifiers that enable linking ad bids to an individual’s entire digital footprint and bid request history — as is the case now. Which essentially translates into: RIP privacy rights.

Ryan argues that RTB without personal data would still offer plenty of “value” to advertisers — who could still reach people based on general locations and via real-time interests. (It’s a model that sounds much like what privacy search engine DuckDuckGo is doing — and DuckDuckGo has also been growing.)
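As a rough sketch of what that could look like in practice (again hypothetical, not a published spec), stripping a bid request down to non-personal signals means dropping the unique identifiers and coarsening the location to something like city level:

def strip_personal_data(req):
    # Keep the contextual bits an advertiser can still bid on.
    cleaned = {
        "id": req["id"],
        "app": req.get("app", {}),
        "imp": req.get("imp", []),
        "device": {"os": req.get("device", {}).get("os")},
    }
    geo = (req.get("device") or {}).get("geo")
    if geo:
        # Round coordinates to roughly city-level precision.
        cleaned["device"]["geo"] = {"lat": round(geo["lat"], 1),
                                    "lon": round(geo["lon"], 1)}
    # Deliberately omitted: device.ifa, user.id and any bid-history keys.
    return cleaned

example = {
    "id": "b7c1e2d4-example",
    "app": {"bundle": "com.example.flashlight"},
    "imp": [{"banner": {"w": 320, "h": 50}}],
    "device": {"ifa": "3f2a9c7e-0000", "os": "Android",
               "geo": {"lat": 48.8738, "lon": 2.2950}},
    "user": {"id": "example-user-7421"},
}
print(strip_personal_data(example))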

The really big problem, though, is turning the behavioral ad tanker around. Given that the ecosystem is embedded, even as the duopoly milks it.

That’s also why Ryan is so hopeful now, though, having parsed the CNIL decision.

His reading is that regulators will play a decisive role in pushing the ad industry’s trigger — and forcing through much-needed change in its targeting behavior.

“Unless the entire industry moves together, no one can be the first to remove personal data from bid requests but if the regulators step in in a big way… and say you’re all going to go out of business if you keep putting personal data into bid requests then everyone will come together — like the music industry was forced to eventually, under Steve Jobs,” he argues. “Everyone can together decide on a new short term disadvantageous but long term highly advantageous change.”

Of course such a radical reshaping is not going to happen overnight. Regulatory triggers tend to be slow motion unfoldings at the best of times. You also have to factor in the inexorable legal challenges.

But look closely and you’ll see both momentum massing behind privacy — and regulatory writing on the wall.

“Are we going to see programmatic forced to be non-personal and therefore better for every single citizen of the world (except, say, if they work for a data broker),” adds Ryan, posing his own concluding question. “Will that massive change, which will help society and the web… will that change happen before Christmas? No. But it’s worth working on. And it’s going to take some time.

“It could be two years from now that we have the finality. But a finality there will be. Detroit was only able to fight against regulation for so long. It does come.”

Who’d have thought “taking back control” could ever sound so good?


Source: The Tech Crunch


Ransomware technique uses your real passwords to trick you

Posted on Jul 12, 2018 in Apps, Google, Identity Theft, Messenger, Password, Prevention, ransomware, Safety, Security, security breaches, Startups, TC, web browser | 0 comments

A few folks have reported a new ransomware technique that preys upon corporate inability to keep passwords safe. The notes – which are usually aimed at instilling fear – are simple: the hacker says “I know that your password is X. Give me a bitcoin and I won’t blackmail you.”

Programmer Can Duruk reported getting the email today.

The email reads:

I’m aware that X is your password.

You don’t know me and you’re thinking why you received this e mail, right?

Well, I actually placed a malware on the porn website and guess what, you visited this web site to have fun (you know what I mean). While you were watching the video, your web browser acted as a RDP (Remote Desktop) and a keylogger which provided me access to your display screen and webcam. Right after that, my software gathered all your contacts from your Messenger, Facebook account, and email account.

What exactly did I do?

I made a split-screen video. First part recorded the video you were viewing (you’ve got a fine taste haha), and next part recorded your webcam (Yep! It’s you doing nasty things!).

What should you do?

Well, I believe, $1400 is a fair price for our little secret. You’ll make the payment via Bitcoin to the below address (if you don’t know this, search “how to buy bitcoin” in Google) .

BTC Address: 1Dvd7Wb72JBTbAcfTrxSJCZZuf4tsT8V72
(It is cAsE sensitive, so copy and paste it)

Important:

You have 24 hours in order to make the payment. (I have an unique pixel within this email message, and right now I know that you have read this email). If I don’t get the payment, I will send your video to all of your contacts including relatives, coworkers, and so forth. Nonetheless, if I do get paid, I will erase the video immidiately. If you want evidence, reply with “Yes!” and I will send your video recording to your 5 friends. This is a non-negotiable offer, so don’t waste my time and yours by replying to this email.

To be clear, there is very little possibility that anyone has video of you cranking it unless, of course, you video yourself cranking it. Further, this is almost always a scam. That said, the fact that the hackers are able to supply your real passwords – most probably gleaned from the multiple corporate break-ins that have happened over the past few years – is a clever change to the traditional cyber-blackmail methodology.

Luckily, the hackers don’t have current passwords.

“However, all three recipients said the password was close to ten years old, and that none of the passwords cited in the sextortion email they received had been used anytime on their current computers,” wrote researcher Brian Krebs. In short, the password files the hackers have are very old and outdated.

To keep yourself safe, cover your webcam when not in use and change your passwords regularly. It can be a hassle, but little else will keep you safer than using two-factor authentication and secure logins.
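One practical extra step, as a sketch: the Pwned Passwords “range” API lets you check whether a password has shown up in known breach dumps without revealing it — only the first five characters of its SHA-1 hash ever leave your machine. A minimal Python version might look like this (the example password is obviously a placeholder):

import hashlib
import requests

def times_password_breached(password):
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # The API returns every known hash suffix sharing the 5-character prefix.
    resp = requests.get("https://api.pwnedpasswords.com/range/" + prefix,
                        timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        hash_suffix, count = line.split(":")
        if hash_suffix == suffix:
            return int(count)
    return 0

if times_password_breached("hunter2"):
    print("This password appears in known breaches — change it everywhere.")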


Source: The Tech Crunch
