
The blog of DataDiggers


Google discloses security bug in its Bluetooth Titan Security Keys, offers free replacement

Posted on May 15, 2019 in Bluetooth, computer security, cryptography, cybercrime, Google, key, Keys, mobile security, Password, phishing, Security, security token, TC, wireless

Google today disclosed a security bug in its Bluetooth Titan Security Key that could allow an attacker in close physical proximity to circumvent the security the key is supposed to provide. The company says the bug is due to a “misconfiguration in the Titan Security Keys’ Bluetooth pairing protocols” and that even the faulty keys still protect against phishing attacks. Still, the company is providing a free replacement key to all existing users.

The bug affects all Bluetooth Titan Security Keys that have a “T1” or “T2” printed on the back; the keys sell for $50 in a bundle that also includes a standard USB/NFC key.

To exploit the bug, an attacker would have to be within Bluetooth range (about 30 feet) and act swiftly as you press the button on the key to activate it. The attacker can then use the misconfigured protocol to connect their own device to the key before your own device connects. With that — and assuming that they already have your username and password — they could sign into your account.

Google also notes that before you can use your key, it has to be paired to your device. An attacker could also exploit the bug by masquerading their own device as your security key and connecting to your device at the moment you press the button on the key. Once connected, the attacker could make their device look like a keyboard or mouse and remotely control your laptop, for example.

All of this has to happen at exactly the right time, though, and the attacker must already know your credentials. Still, a persistent attacker could make it work.

Google argues that this issue doesn’t affect the Titan key’s main mission, which is to guard against phishing attacks, and argues that users should continue to use the keys until they get a replacement. “It is much safer to use the affected key instead of no key at all. Security keys are the strongest protection against phishing currently available,” the company writes in today’s announcement.

The company also offers a few tips for mitigating the potential security issues here.

Some of Google’s competitors in the security key space, including Yubico, decided against using Bluetooth because of potential security issues and criticized Google for launching a Bluetooth key. “While Yubico previously initiated development of a BLE security key, and contributed to the BLE U2F standards work, we decided not to launch the product as it does not meet our standards for security, usability and durability,” Yubico founder Stina Ehrensvärd wrote when Google launched its Titan keys.


Source: TechCrunch


Microsoft open-sources a crucial algorithm behind its Bing Search services

Posted on May 15, 2019 in Artificial Intelligence, Bing, Cloud, computing, Developer, Microsoft, open source software, search results, Software, windows phone, world wide web

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return search results to its users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where users search through vast data troves, including in retail, though in this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
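
To make that flow concrete, here is a minimal sketch of the vector-search idea in Python. It deliberately uses a brute-force NumPy similarity scan instead of SPTAG's actual space-partition trees and neighborhood graphs (which are what keep lookups fast at billions of vectors), and the embedding dimensions and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for vectors a deep learning model has already produced for each
# indexed item (a word, an image, a web page snippet, and so on).
index_vectors = rng.normal(size=(10_000, 128)).astype(np.float32)
index_norms = np.linalg.norm(index_vectors, axis=1)

def search(query_vector: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the indices of the k index vectors most similar to the query."""
    scores = index_vectors @ query_vector / (index_norms * np.linalg.norm(query_vector))
    return np.argsort(-scores)[:k]  # cosine similarity, highest first

# At query time the model would encode the user's text or image into a vector;
# a random vector stands in for that here.
query = rng.normal(size=128).astype(np.float32)
print(search(query))
```

A real SPTAG index replaces this brute-force scan with approximate nearest-neighbor structures, which is what lets a search engine answer such queries over billions of vectors in milliseconds.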

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.


Source: TechCrunch


GitHub gets a package registry

Posted on May 10, 2019 in computing, Developer, Git, GitHub, Java, Javascript, npm, ruby, Software, TC, version control

GitHub today announced the launch of a limited beta of the GitHub Package Registry, its new package management service that lets developers publish public and private packages next to their source code.

To be clear, GitHub isn’t launching a competitor to tools like npm or RubyGems. What the company is launching, however, is a service that is compatible with these tools and allows developers to find and publish their own packages, using the same GitHub interface they use for their code. The new service is currently compatible with JavaScript (npm), Java (Maven), Ruby (RubyGems), .NET (NuGet) and Docker images, with support for other languages and tools to come.

“GitHub Package Registry is compatible with common package management clients, so you can publish packages with your choice of tools,” Simina Pasat, director of Product Management at GitHub, explains in today’s announcement. “If your repository is more complex, you’ll be able to publish multiple packages of different types. And, with webhooks or with GitHub Actions, you can fully customize your publishing and post-publishing workflows.”

With this, businesses can then also provide their employees with a single set of credentials to manage both their code and packages — and this new feature makes it easy to create a set of approved packages, too. Users will also get download statistics and access to the entire history of the package on GitHub.

Most open-source projects already develop their code on GitHub before publishing packages to a public registry. GitHub argues that these developers can now also use the GitHub Package Registry to publish pre-release versions, for example.

Developers already often use GitHub to host their private repositories; after all, it makes sense to keep packages and code in the same place. What GitHub is doing here, to some degree, is formalizing this practice and wrapping a product around it.


Source: TechCrunch


Google starts rolling out better AMP URLs

Posted on Apr 17, 2019 in Amp+, chrome, digital media, Google, google search, HTML, Mobile, mobile web, Online Advertising, TC, world wide web

Publishers don’t always love Google’s AMP pages, but readers surely appreciate their speed, and while publishers are loath to give Google more power, virtually every major site now supports the format. One AMP quirk that publishers definitely never liked is about to go away, though. Starting today, when you use Google Search and click on an AMP link, the browser will display the publisher’s real URL instead of a “google.com/amp” link.

This move has been in the making for well over a year. Last January, the company announced that it was embarking on a multi-month effort to load AMP pages from the Google AMP cache without displaying the Google URL.

At the core of this effort was the new Web Packaging standard, which uses signed exchanges with digital signatures to let the browser trust a document as if it came from the publisher’s own origin. By default, a browser should reject scripts in a web page that try to access data that doesn’t come from the same origin; signed exchanges give it a way to attribute cached content to the publisher instead. Publishers will have to do a bit of extra work and publish both signed and unsigned versions of their stories.
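
As a rough illustration of the trust model (not the actual Signed HTTP Exchanges wire format), the sketch below shows the underlying idea using the third-party Python cryptography package: the publisher signs the document bytes with its private key, and a client holding the publisher's public key can verify that the content really originates from that publisher, no matter which cache delivered it. The keys and document here are invented for the example.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Publisher side: sign the document before handing it to a third-party cache.
private_key = ec.generate_private_key(ec.SECP256R1(), default_backend())
document = b"<html amp>...article markup...</html>"
signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))

# Client side: verify the signature against the publisher's public key
# (in the real scheme the key is vouched for by the publisher's certificate).
public_key = private_key.public_key()
try:
    public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))
    print("content verified as the publisher's; safe to attribute to its origin")
except InvalidSignature:
    print("verification failed; do not attribute the content to the publisher")
```

The actual standard wraps this idea in a specific exchange format tied to the publisher's certificate, which is the extra work publishers (or their CDN) take on when producing the signed versions of their pages.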

 

Quite a few publishers already do this, given that Google started alerting publishers of this change in November 2018. For now, though, only Chrome supports the core features behind this service, but other browsers will likely add support soon, too.

For publishers, this is a pretty big deal, given that their domain name is a core part of their brand identity. Using their own URL also makes it easier to get analytics, and the standard grey bar that sits on top of AMP pages to show which site you are on is no longer necessary, because the site’s name will now be in the URL bar.

To launch this new feature, Google also partnered with Cloudflare, which launched its AMP Real URL feature today. It will take a bit before the feature rolls out to all of Cloudflare’s users, who can then enable it with a single click. Once enabled, Cloudflare will automatically sign every AMP page it sends to the Google AMP cache. For the time being, that makes Cloudflare the only CDN that supports this feature, though others will surely follow.

“AMP has been a great solution to improve the performance of the internet and we were eager to work with the AMP Project to help eliminate one of AMP’s biggest issues — that it wasn’t served from a publisher’s own domain,” said Matthew Prince, co-founder and CEO of Cloudflare. “As the only provider currently enabling this new solution, our global scale will allow publishers everywhere to benefit from a faster and more brand-aware mobile experience for their content.”

 


Source: TechCrunch


Vizion.ai launches its managed Elasticsearch service

Posted on Mar 28, 2019 in Amazon Web Services, api, Artificial Intelligence, Caching, cloud computing, computing, Developer, Elastic, Elasticsearch, Enterprise, ML, TC, world wide web

Setting up Elasticsearch, the open-source system that many companies large and small use to power their distributed search and analytics engines, isn’t the hardest thing. What is very hard, though, is to provision the right amount of resources to run the service, especially when your users’ demand comes in spikes, without overpaying for unused capacity. Vizion.ai’s new Elasticsearch Service does away with all of this by essentially offering Elasticsearch as a service and only charging its customers for the infrastructure they use.

Vizion.ai’s service automatically scales up and down as needed. It’s a managed service, delivered as a SaaS platform, that can support deployments on both private and public clouds, with full API compatibility with the standard Elastic stack, which typically includes tools like Kibana for visualizing data, Beats for sending data to the service and Logstash for transforming incoming data and setting up data pipelines. Users can also easily create several stacks for testing and development, for example.
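
In practice, that API compatibility means a stock Elasticsearch client should work unchanged once pointed at the managed endpoint. The snippet below is a minimal sketch using the standard elasticsearch Python client; the host URL and credentials are hypothetical placeholders, not Vizion.ai’s actual connection details.

```python
from elasticsearch import Elasticsearch

# Hypothetical endpoint and credentials for a managed cluster.
es = Elasticsearch(
    ["https://my-cluster.example-elasticsearch-service.com:9200"],
    http_auth=("elastic", "YOUR_PASSWORD"),
)

# Index a document exactly as you would against a self-hosted cluster...
es.index(index="app-logs", body={"level": "error", "message": "disk almost full"})

# ...and query it back with the standard Query DSL.
results = es.search(index="app-logs", body={"query": {"match": {"level": "error"}}})
print(results["hits"]["hits"])
```

Because the wire protocol is unchanged, existing Kibana dashboards, Beats shippers and Logstash pipelines should be able to point at the same endpoint without modification.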


“When you go into the AWS Elasticsearch service, you’re going to be looking at dozens or hundreds of permutations for trying to build your own cluster,” Vizion.ai’s VP and GM Geoff Tudor told me. “Which instance size? How many instances? Do I want geographical redundancy? What’s my networking? What’s my security? And if you choose wrong, then that’s going to impact the overall performance. […] We do balancing dynamically behind that infrastructure layer.” To do this, the service looks at the utilization patterns of a given user and then allocates resources to optimize for the specific use case.

What Vizion.ai has done here is take some of the work from its parent company Panzura, a multi-cloud storage service for enterprises that holds plenty of patents around data caching, and apply it to this new Elasticsearch service.

There are obviously other companies that offer commercial Elasticsearch platforms already. Tudor acknowledges this, but argues that his company’s platform is different. With other products, he argues, you have to decide on the size of your block storage for your metadata upfront, for example, and you typically want SSDs for better performance, which can quickly get expensive. Thanks to Panzura’s IP, Vizion.ai is able to bring down the cost by caching recent data on SSDs and keeping the rest in cheaper object storage pools.
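
The tiering idea Tudor describes can be pictured as a small fast cache sitting in front of a larger, cheaper store. The toy Python sketch below illustrates only that concept (a least-recently-used fast tier backed by a slow tier); it is not Panzura’s or Vizion.ai’s actual caching implementation.

```python
from collections import OrderedDict

class TieredStore:
    """Keep hot entries in a small fast tier; fall back to a cheap slow tier on a miss."""

    def __init__(self, fast_capacity: int, slow_tier: dict):
        self.fast = OrderedDict()      # stands in for the SSD cache
        self.fast_capacity = fast_capacity
        self.slow = slow_tier          # stands in for the object-storage pool

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)             # fast-tier hit: mark as recently used
            return self.fast[key]
        value = self.slow[key]                     # miss: read from the cheap tier
        self.fast[key] = value                     # promote the entry...
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)          # ...and evict the least recently used
        return value

store = TieredStore(fast_capacity=2, slow_tier={"a": 1, "b": 2, "c": 3})
print([store.get(k) for k in ("a", "b", "c", "a")])
```

The cost win in such a design comes from sizing the fast tier for the working set rather than for the whole index, with everything else living in cheaper storage.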

He also noted that the company is positioning the overall Vizion.ai service, with the Elasticsearch service as one of the earliest components, as a platform for running AI and ML workloads. Support for TensorFlow, PredictionIO (which plays nicely with Elasticsearch) and other tools is also in the works. “We want to make this an easy serverless ML/AI consumption in a multi-cloud fashion, where not only can you leverage the compute, but you can also have your storage of record at a very cost-effective price point.”


Source: TechCrunch
