The blog of DataDiggers

Reality Check: The marvel of computer vision technology in today’s camera-based AR systems

Posted by on May 15, 2019 in Animation, AR, ar/vr, Artificial Intelligence, Augmented Reality, Column, Computer Vision, computing, Developer, digital media, Gaming, gif, Global Positioning System, gps, mobile phones, neural network, starbucks, TC, virtual reality, VR | 0 comments

The British science fiction writer Sir Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Augmented reality has the potential to instill awe and wonder in us just as magic would. For the very first time in the history of computing, we now have the ability to blur the line between the physical world and the virtual world. AR promises to bring forth the dawn of a new creative economy, where digital media can be brought to life and given the ability to interact with the real world.

AR experiences can seem magical, but what exactly is happening behind the curtain? To answer this, we must look at the three basic foundations of a camera-based AR system like our smartphone.

  1. How do computers know where they are in the world? (Localization + Mapping)
  2. How do computers understand what the world looks like? (Geometry)
  3. How do computers understand the world as we do? (Semantics)

Part 1: How do computers know where they are in the world? (Localization)

Mars Rover Curiosity taking a selfie on Mars. Source: https://www.nasa.gov/jpl/msl/pia19808/looking-up-at-mars-rover-curiosity-in-buckskin-selfie/

When NASA scientists landed the rover on Mars, they needed a way for the robot to navigate itself on another planet without the use of a global positioning system (GPS). They came up with a technique called Visual Inertial Odometry (VIO) to track the rover’s movement over time without GPS. This is the same technique that our smartphones use to track their spatial position and orientation.

A VIO system is made up of two parts.
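The two parts are, broadly, a camera that tracks visual features across frames and an inertial measurement unit (IMU) that measures acceleration and rotation; fusing the two keeps the position estimate from drifting the way either sensor would on its own. Below is a minimal, illustrative Python sketch of that fusion idea; all numbers are made up, and the simple weighted blend stands in for the Kalman-filter or optimization back ends that real VIO systems use.

```python
import numpy as np

def integrate_imu(position, velocity, accel, dt):
    """Dead-reckon a new position from accelerometer readings (the inertial part)."""
    velocity = velocity + accel * dt
    position = position + velocity * dt
    return position, velocity

def fuse_visual_estimate(predicted_pos, visual_pos, visual_weight=0.3):
    """Blend the inertial prediction with a position estimate from camera
    feature tracking (the visual part). Real systems use a Kalman filter or
    bundle adjustment instead of this simple weighted average."""
    return (1 - visual_weight) * predicted_pos + visual_weight * visual_pos

# Toy trajectory: constant acceleration along x, with a noisy "visual" fix each step.
pos, vel, dt = np.zeros(3), np.zeros(3), 0.01
for _ in range(100):
    accel = np.array([0.1, 0.0, 0.0])                 # reading from the IMU
    pos, vel = integrate_imu(pos, vel, accel, dt)
    visual_fix = pos + np.random.normal(0, 0.001, 3)  # stand-in for a camera-based estimate
    pos = fuse_visual_estimate(pos, visual_fix)

print(pos)  # estimated position after one simulated second
```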


Source: The Tech Crunch


VMware acquires Bitnami to deliver packaged applications anywhere

Posted by on May 15, 2019 in bitnami, Cloud, Developer, Enterprise, Fundings & Exits, M&A, Mergers and Acquisitions, vmware | 0 comments

VMware announced today that it’s acquiring Bitnami, the packaged application company that was a member of the Y Combinator Winter 2013 class. The companies didn’t share the purchase price.

With Bitnami, the company can now deliver more than 130 popular software packages in a variety of formats, such as Docker containers or virtual machines, an approach that should be attractive for VMware as it transforms into more of a cloud services company.

“Upon close, Bitnami will enable our customers to easily deploy application packages on any cloud — public or hybrid — and in the most optimal format — virtual machine (VM), containers and Kubernetes helm charts. Further, Bitnami will be able to augment our existing efforts to deliver a curated marketplace to VMware customers that offers a rich set of applications and development environments in addition to infrastructure software,” the company wrote in a blog post announcing the deal.

Per usual, Bitnami’s founders see the exit through the prism of being able to build out the platform faster with the help of a much larger company. “Joining forces with VMware means that we will be able to both double-down on the breadth and depth of our current offering and bring Bitnami to even more clouds as well as accelerating our push into the enterprise,” the founders wrote in a blog post on the company website.

Holger Mueller, an analyst at Constellation Research, says the deal fits well with VMware’s overall strategy. “Enterprises want easy, fast ways to deploy packaged applications and providers like Bitnami take the complexity out of this process. So this is a key investment for VMware that wants to position itself not only as the trusted vendor for virtualization across the hybrid cloud, but also as a trusted application delivery vendor,” he said.

The company has raised a modest $1.1 million since its founding in 2011 and says it has been profitable since its early days, when it took that funding. In the blog post, the company states that, from a customer’s perspective, nothing will change.

“In a way, nothing is changing. We will continue to develop and maintain our application catalog across all the platforms we support and even expand to additional ones. Additionally, if you are a company using Bitnami in production, a lot of new opportunities just opened up.”

Time will tell whether that is the case, but it is likely that Bitnami will be able to expand its offerings as part of a larger organization like VMware. The deal is expected to close by the end of this quarter (which is fiscal Q2 2020 for VMware).

VMware is a member of the Dell federation of products and came over as part of the massive $67 billion EMC deal in 2016. The company operates independently, is sold as a separate company on the stock market and makes its own acquisitions.


Source: The Tech Crunch


Microsoft open-sources a crucial algorithm behind its Bing Search services

Posted by on May 15, 2019 in Artificial Intelligence, Bing, Cloud, computing, Developer, Microsoft, open source software, search results, Software, windows phone, world wide web | 0 comments

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return search results to its users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where users search through vast data troves, including in retail, though in this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
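To make the vector-search step concrete, here is a minimal, hypothetical sketch using brute-force cosine similarity in NumPy; the item names and vectors are invented, and it only illustrates the concept. SPTAG’s value is precisely that it avoids this exhaustive scan by building approximate space-partition-tree and graph indexes that answer such queries in milliseconds at billion-vector scale.

```python
import numpy as np

# Toy "index": one embedding per item. In practice each vector comes from a
# pre-trained deep learning model and represents a word, snippet or image.
items = ["coffee shop", "espresso bar", "car repair", "tire service"]
index_vectors = np.random.rand(len(items), 128)
index_vectors /= np.linalg.norm(index_vectors, axis=1, keepdims=True)

def search(query_vector, k=2):
    """Return the k items whose vectors are most similar to the query vector."""
    q = query_vector / np.linalg.norm(query_vector)
    scores = index_vectors @ q                # cosine similarity via dot products
    top = np.argsort(-scores)[:k]
    return [(items[i], float(scores[i])) for i in top]

# A real query (text or image) would first be encoded by the same model;
# a random vector stands in for that embedding here.
print(search(np.random.rand(128)))
```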

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.


Source: The Tech Crunch


Solo.io wants to bring order to service meshes with centralized management hub

Posted by on May 15, 2019 in Cloud, Developer, Enterprise, idit levine, microservices, open source, service mesh, solo.io, Startups, TC | 0 comments

As containers and microservices have proliferated, a new kind of tool called the service mesh has developed to help manage and understand interactions between services. While Kubernetes has emerged as the clear container orchestration tool of choice, there is much less certainty in the service mesh market. Solo.io today announced a new open-source tool called Service Mesh Hub, designed to help companies manage multiple service meshes in a single interface.

It is early days for the service mesh concept, but there are already multiple offerings, including Istio, Linkerd (pronounced Linker-Dee) and Envoy. While the market sorts itself out, it requires a new set of tools, a management layer, so that developers and operations can monitor and understand what’s happening inside the various service meshes they are running.

Idit Levine, founder and CEO at Solo, says she formed the company because she saw an opportunity to develop a set of tooling for a nascent market. Since founding the company in 2017, it has developed several open-source tools to fill that service mesh tool vacuum.

Levine says that she recognized that companies would be using multiple service meshes for multiple situations and that not every company would have the technical capabilities to manage this. That is where the idea for the Service Mesh Hub was born.

It’s a centralized place for companies to add the different service mesh tools they are using, understand the interactions happening within the mesh and add extensions to each one from a kind of extension app store. Solo wants to make adding these tools a simple matter of pointing and clicking. While it obviously still requires a certain level of knowledge about how these tools work, it removes some of the complexity around managing them.

Solo.io Service Mesh Hub (Screenshot: Solo.io)

“The reason we created this is because we believe service mesh is something big, and we want people to use it, and we feel it’s hard to adopt right now. We believe by creating that kind of framework or platform, it will make it easier for people to actually use it,” Levine told TechCrunch.

The vision is that eventually companies will be able to add extensions to the store for free, or even at some point for a fee, and it is through these paid extensions that the company will be able to make money. She recognized that some companies will be creating extensions for internal use only, and in those cases, they can add them to the hub and mark them as private so that only that company can see them.

For every abstraction, it seems, there is a new set of problems to solve. The service mesh is a response to the problem of managing multiple services. It solves three key issues, according to Levine: it allows a company to route traffic between microservices, to gain visibility into them through the mesh’s logs and metrics, and to enforce security by managing which services can talk to each other.

Levine’s company is a response to the issues that have developed around understanding and managing the service meshes themselves. She says she doesn’t worry about a big company coming in and undermining her mission because she says that they are too focused on their own tools to create a set of uber-management tools like these (but that doesn’t mean the company wouldn’t be an attractive acquisition target).

So far, the company has taken more than $13 million in funding, according to Crunchbase data.


Source: The Tech Crunch


India’s most popular services are becoming super apps

Posted by on May 11, 2019 in Apps, Asia, China, Cloud, Developer, Facebook, Finance, Flipkart, Food, Foodpanda, Gaana, Gaming, grab, haptik, hike, India, MakeMyTrip, Media, Microsoft, microsoft garage, Mobile, Mukesh Ambani, mx player, payments, Paytm, paytm mall, reliance jio, saavn, SnapDeal, Social, Startups, Tapzo, Tencent, Times Internet, Transportation, Truecaller, Uber, Vijay Shekhar Sharma, WeChat | 0 comments

Truecaller, an app that helps users screen strangers and robocallers, will soon allow users in India, its largest market, to borrow up to a few hundred dollars.

The credit option will be the fourth feature the nine-year-old app has added to its service in the last two years. So far it has added texting, call recording and mobile payment features, some of which are only available to users in India. Of the 140 million daily active users of Truecaller, 100 million live in India.

The story of Truecaller’s ever-growing ambition illustrates an interesting phase in India’s internet market, which is seeing a number of companies mold their single-function apps into multi-function, so-called super apps.

Inspired by China

This may sound familiar. Truecaller and others are trying to replicate Tencent’s playbook. The Chinese tech giant’s WeChat, an app that began life as a messaging service, has in recent years become a one-stop solution for a range of features — gaming, payments, social commerce and publishing.

WeChat has become such a dominant player in the Chinese internet ecosystem that it is effectively serving as an operating system and getting away with it. The service maintains its own app store that hosts mini apps and lets users tip authors. This has put it at odds with Apple, though the iPhone-maker has little choice but to make peace with it.

For all its dominance in China, WeChat has struggled to gain traction in India and elsewhere. But its model today is prominently on display in other markets. Grab and Go-Jek in Southeast Asian markets are best known for their ride-hailing services, but have begun to offer a range of other features, including food delivery, entertainment, digital payments, financial services and healthcare.

The proliferation of low-cost smartphones and mobile data in India, thanks in part to Google and Facebook, has helped tens of millions of Indians come online in recent years, with mobile the dominant platform. The number of internet users has already exceeded 500 million in India, up from some 350 million in mid-2015. According to some estimates, India may have north of 625 million users by year-end.

This has fueled India’s global image as both the fastest-growing internet and smartphone market. Naturally, local apps in India, and those from international firms that operate here, are beginning to replicate WeChat’s model.

Founder and chief executive officer (CEO) of Paytm Vijay Shekhar Sharma speaks during the launch of Paytm payments Bank at a function in New Delhi on November 28, 2017 (AFP PHOTO / SAJJAD HUSSAIN)

Leading that pack is Paytm, the popular homegrown mobile wallet service that’s valued at $18 billion and has been heavily backed by Alibaba, the e-commerce giant that rivals Tencent and crucially missed the mobile messaging wave in China.

Commanding attention

In recent years, the Paytm app has taken a leaf out of China’s book with additions that include the ability to text merchants; book movie, flight and train tickets; and buy shoes, books and just about anything from its e-commerce arm Paytm Mall. It also has added a number of mini games to the app. The company said earlier this month that more than 30 million users are engaging with its games.

Why bother with diversifying your app’s offering? Well, for Vijay Shekhar Sharma, founder and CEO of Paytm, the question is why shouldn’t you? If your app serves a certain number of transactions (or engagements) in a day, you have a good shot at disrupting many businesses that generate fewer transactions, he told TechCrunch in an interview.

At the end of the day, companies want to garner as much attention of a user as they can, said Jayanth Kolla, founder and partner of research and advisory firm Convergence Catalyst.

“This is similar to how cable networks such as Fox and Star have built various channels with a wide range of programming to create enough hooks for users to stick around,” Kolla said.

“The agenda for these apps is to hold people’s attention and monopolize a user’s activities on their mobile devices,” he added, explaining that higher engagement in an app translates to higher revenue from advertising.

Paytm’s Sharma agrees. “Payment is the moat. You can offer a range of things including content, entertainment, lifestyle, commerce and financial services around it,” he told TechCrunch. “Now that’s a business model… payment itself can’t make you money.”

Big companies follow suit

Other businesses have taken note. Flipkart-owned payment app PhonePe, which claims to have 150 million active users, today hosts a number of mini apps. Some of those include services for ride-hailing service Ola, hotel booking service Oyo and travel booking service MakeMyTrip.

Paytm (the first two images from left) and PhonePe offer a range of services that are integrated into their payments apps

What works for PhonePe is that its core business — payments — has amassed enough users, Himanshu Gupta, former associate director of marketing and growth for WeChat in India, told TechCrunch. He added that unlike e-commerce giant Snapdeal, which attempted to offer similar offerings back in the day, PhonePe has tighter integration with other services, and is built using modern architecture that gives users almost native app experiences inside mini apps.

When you talk about strategy at Flipkart, the homegrown e-commerce giant acquired by Walmart last year for a cool $16 billion, chances are arch-rival Amazon is hatching similar plans, and that’s indeed the case with super apps.

In India, Amazon offers its customers a range of payment features such as the ability to pay phone bills and cable subscription through its Amazon Pay service. The company last year acquired Indian startup Tapzo, an app that offers integration with popular services such as Uber, Ola, Swiggy and Zomato, to boost Pay’s business in the nation.

Another U.S. giant, Microsoft, is also aboard the super-app train. The Redmond-based company has added a slew of new features to SMS Organizer, an app born out of its Microsoft Garage initiative in India. The app, which began as a texting tool that screens spam messages and helps users keep track of important SMSs, recently partnered with India’s education board CBSE to deliver exam results to 10th and 12th grade students.

This year, the SMS Organizer app added an option to track live train schedules through a partnership with Indian Railways, and there’s support for speech-to-text. It also offers personalized discount coupons from a range of companies, giving users an incentive to check the app more often.

Like in other markets, Google and Facebook hold a dominant position in India. More than 95% of smartphones sold in India run the Android operating system. There is no viable local — or otherwise — alternative to Search, Gmail and YouTube, which counts India as its fastest growing market. But Google hasn’t necessarily made any push to significantly expand the scope of any of its offerings in India.

India is the biggest market for WhatsApp, and Facebook’s marquee app too has more than 250 million users in the nation. WhatsApp launched a pilot payments program in India in early 2018, but is yet to get clearance from the government for a nationwide rollout. (It isn’t happening for at least another two months, a person familiar with the matter said.) In the meanwhile, Facebook appears to be hatching a WeChatization of Messenger, albeit that app is not so big in India.

Ride-hailing service Ola too, like Grab and Go-Jek, plans to add financial services such as credit to the platform this year, a source familiar with the company’s plans told TechCrunch.

“We have an abundance of data about our users. We know how much money they spend on rides, how often they frequent the city and how often they order from restaurants. It makes perfect sense to give them these value-added features,” the person said. Ola has already branched out of transport after it acquired food delivery startup Foodpanda in late 2017, but it hasn’t yet made major waves in financial services despite giving its Ola Money service its own dedicated app.

The company positioned Ola Money as a super app, expanded its features through acquisition and tie ups with other players and offered discounts and cashbacks. But it remains behind Paytm, PhonePe and Google Pay, all of which are also offering discounts to customers.

Integrated entertainment

Super apps indeed come in all shapes and sizes, beyond core services like payment and transportation — the strategy is showing up in apps and services that entertain India’s internet population.

MX Player, a video playback app with more than 175 million users in India that was acquired by Times Internet for some $140 million last year, has big ambitions. Last year, it introduced a video streaming service to bolster its app to grow beyond merely being a repository. It has already commissioned the production of several original shows.

In recent months, it has also integrated Gaana, the largest local music streaming app that is also owned by Times Internet. Now its parent company, which rivals Google and Facebook on some fronts, is planning to add mini games to MX Player, a person familiar with the matter said, to give it additional reach and appeal.

Some of these apps, especially those that have amassed tens of millions of users, have a real shot at diversifying their offerings, analyst Kolla said. There is a bar of entry, though. A huge user base that engages with a product on a daily basis is a must for any company if it is to explore chasing the super app status, he added.

Indeed, there are examples of companies that had the vision to see the benefits of super apps but simply couldn’t muster the requisite user base. As mentioned, Snapdeal tried and failed at expanding its app’s offerings. Messaging service Hike, which was valued at more than $1 billion two years ago and includes WeChat parent Tencent among its investors, added games and other features to its app, but ultimately saw poor engagement. Its new strategy is the reverse: to break its app into multiple pieces.

“In 2019, we continue to double down on both social and content but we’re going to do it with an evolved approach. We’re going to do it across multiple apps. That means, in 2019 we’re going to go from building a super app that encompasses everything, to Multiple Apps solving one thing really well. Yes, we’re unbundling Hike,” Kavin Mittal, founder and CEO of Hike, wrote in an update published earlier this year.

And Reliance Jio, of course

For the rest, the race is still on, but there are big horses waiting to enter and add further competition.

Reliance Jio, a subsidiary of conglomerate Reliance Industries that is owned by India’s richest man, Mukesh Ambani, is planning to introduce a super app that will host more than 100 features, according to a person familiar with the matter. Local media first reported the development.

It will be fascinating to see how that works out. Reliance Jio, which almost single-handedly disrupted the telecom industry in India with its low-cost data plans and free voice calls, has amassed tens of millions of users on the bouquet of apps that it offers at no additional cost to Jio subscribers.

Beyond that diverse selection of homespun apps, Reliance has also taken an M&A-based approach to assemble the pieces of its super app strategy.

It bought music streaming service Saavn last year and quickly integrated it with its own music app, JioMusic. Last month, it acquired Haptik, a startup that develops “conversational” platforms and virtual assistants, in a deal worth more than $100 million. It already has the user bases required: JioTV, an app that offers access to over 500 TV channels, and JioNews, an app that additionally offers hundreds of magazines and newspapers, routinely appear among the top apps in the Google Play Store.

India’s super app revolution is in its early days, but the trend is surely one to keep an eye on as the country moves into its next chapter of internet usage.


Source: The Tech Crunch


GitHub gets a package registry

Posted by on May 10, 2019 in computing, Developer, Git, GitHub, Java, Javascript, npm, ruby, Software, TC, version control | 0 comments

GitHub today announced the launch of a limited beta of the GitHub Package Registry, its new package management service that lets developers publish public and private packages next to their source code.

To be clear, GitHub isn’t launching a competitor to tools like npm or RubyGems. What the company is launching, however, is a service that is compatible with these tools and allows developers to find and publish their own packages, using the same GitHub interface they use for their code. The new service is currently compatible with JavaScript (npm), Java (Maven), Ruby (RubyGems), .NET (NuGet) and Docker images, with support for other languages and tools to come.

“GitHub Package Registry is compatible with common package management clients, so you can publish packages with your choice of tools,” Simina Pasat, director of Product Management at GitHub, explains in today’s announcement. “If your repository is more complex, you’ll be able to publish multiple packages of different types. And, with webhooks or with GitHub Actions, you can fully customize your publishing and post-publishing workflows.”

With this, businesses can then also provide their employees with a single set of credentials to manage both their code and packages — and this new feature makes it easy to create a set of approved packages, too. Users will also get download statistics and access to the entire history of the package on GitHub.

Most open-source packages already use GitHub to develop their code before they publish it to a public registry. GitHub argues that these developers can now also use the GitHub Package Registry to publish pre-release versions, for example.

Developers already often use GitHub to host their private repositories. After all, it makes sense to keep packages and code in the same place. What GitHub is doing here, to some degree, is formalize this practice and wrap a product around it.


Source: The Tech Crunch


Cisco open sources MindMeld conversational AI platform

Posted by on May 9, 2019 in Artificial Intelligence, Cisco, Developer, Enterprise, MindMeld, open source, TC, voice recognition | 0 comments

Cisco announced today that it was open-sourcing the MindMeld conversational AI platform, making it available to anyone who wants to use it under the Apache 2.0 license.

MindMeld is the conversational AI company that Cisco bought in 2017. The company put the technology to use in Cisco Spark Assistant later that year to help bring voice commands to meeting hardware, which was just beginning to emerge at the time.

Today, there is a concerted effort to bring voice to enterprise use cases, and Cisco is offering the means for developers to do that with the MindMeld tool set. “Today, Cisco is taking a big step towards empowering developers with more comprehensive and practical tools for building conversational applications by open-sourcing the MindMeld Conversational AI Platform,” Cisco’s head of machine learning Karthik Raghunathan wrote in a blog post.

The company also wants to make it easier for developers to get going with the platform, so it is releasing the Conversational AI Playbook, a step-by-step guidebook to help developers get started with conversation-driven applications. Cisco says this is about empowering developers, and that’s probably a big part of the reason.

But it would also be in Cisco’s best interest to have developers outside of Cisco working with and on this set of tools. By open-sourcing them, the hope is that a community of developers, whether Cisco customers or others, will begin using, testing and improving the tools, helping it develop the platform faster and more broadly than it could on its own, even inside an organization as large as Cisco.

Of course, just because Cisco is offering it doesn’t necessarily mean a community of interested developers will emerge, but given the growing popularity of voice-enabled use cases, chances are some will give it a look. It will be up to Cisco to keep them engaged.

Cisco is making all of this available on its own DevNet platform starting today.


Source: The Tech Crunch


Diving into Google Cloud Next and the future of the cloud ecosystem

Posted by on Apr 14, 2019 in Artificial Intelligence, Cloud, Developer, Enterprise, Events, Government, Personnel, SaaS, Startups, Talent, TC | 0 comments

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller offered up their analysis on the major announcements that came out of Google’s Cloud Next conference this past week, as well as their opinions on the outlook for the company going forward.

Google Cloud announced a series of products, packages and services that it believes will improve the company’s competitive position and differentiate itself from AWS and other peers. Frederic and Ron discuss all of Google’s most promising announcements, including its product for managing hybrid clouds, its new end-to-end AI platform, as well as the company’s heightened effort to improve customer service, communication, and ease-of-use.

“They have all of these AI and machine learning technologies, they have serverless technologies, they have containerization technologies — they have this whole range of technologies.

But it’s very difficult for the average company to take these technologies and know what to do with them, or to have the staff and the expertise to be able to make good use of them. So, the more they do things like this where they package them into products and make them much more accessible to the enterprise at large, the more successful that’s likely going to be because people can see how they can use these.

…Google does have thousands of engineers, and they have very smart people, but not every company does, and that’s the whole idea of the cloud. The cloud is supposed to take this stuff, put it together in such a way that you don’t have to be Google, or you don’t have to be Facebook, you don’t have to be Amazon, and you can take the same technology and put it to use in your company”

Image via Bryce Durbin / TechCrunch

Frederic and Ron dive deeper into how the new offerings may impact Google’s market share in the cloud ecosystem and which verticals represent the best opportunity for Google to win. The two also dig into the future of open source in cloud and how they see customer use cases for cloud infrastructure evolving.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


Source: The Tech Crunch


Vizion.ai launches its managed Elasticsearch service

Posted by on Mar 28, 2019 in Amazon Web Services, api, Artificial Intelligence, Caching, cloud computing, computing, Developer, Elastic, Elasticsearch, Enterprise, ML, TC, world wide web | 0 comments

Setting up Elasticsearch, the open-source system that many companies large and small use to power their distributed search and analytics engines, isn’t the hardest thing. What is very hard, though, is to provision the right amount of resources to run the service, especially when your users’ demand comes in spikes, without overpaying for unused capacity. Vizion.ai’s new Elasticsearch Service does away with all of this by essentially offering Elasticsearch as a service and only charging its customers for the infrastructure they use.

Vizion.ai’s service automatically scales up and down as needed. It’s a managed service and delivered as a SaaS platform that can support deployments on both private and public clouds, with full API compatibility with the standard Elastic stack that typically includes tools like Kibana for visualizing data, Beats for sending data to the service and Logstash for transforming the incoming data and setting up data pipelines. Users can easily create several stacks for testing and development, too, for example.
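Because the service advertises full API compatibility with the standard Elastic stack, the usual clients should work against it unchanged. Here is a minimal, hypothetical sketch using the official Python elasticsearch client (8.x-style API); the endpoint, credentials and index name are invented for illustration.

```python
from elasticsearch import Elasticsearch

# Hypothetical endpoint and API key for a managed cluster; in practice you
# would use whatever connection details the service hands you.
es = Elasticsearch("https://example-cluster.vizion.example:9200",
                   api_key="YOUR_API_KEY")

# Index a document, then run a full-text search against it.
es.index(index="app-logs", document={"service": "checkout",
                                     "level": "error",
                                     "message": "payment gateway timeout"})
es.indices.refresh(index="app-logs")

results = es.search(index="app-logs",
                    query={"match": {"message": "timeout"}})
for hit in results["hits"]["hits"]:
    print(hit["_source"])
```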

Vizion.ai GM and VP Geoff Tudor

“When you go into the AWS Elasticsearch service, you’re going to be looking at dozens or hundreds of permutations for trying to build your own cluster,” Vizion.ai’s VP and GM Geoff Tudor told me. “Which instance size? How many instances? Do I want geographical redundancy? What’s my networking? What’s my security? And if you choose wrong, then that’s going to impact the overall performance. […] We do balancing dynamically behind that infrastructure layer.” To do this, the service looks at the utilization patterns of a given user and then allocates resources to optimize for the specific use case.

What Vizion.ai has done here is take some of the work from its parent company Panzura, a multi-cloud storage service for enterprises that has plenty of patents around data caching, and apply it to this new Elasticsearch service.

There are obviously other companies that offer commercial Elasticsearch platforms already. Tudor acknowledges this, but argues that his company’s platform is different. With other products, he argues, you have to decide on the size of your block storage for your metadata upfront, for example, and you typically want SSDs for better performance, which can quickly get expensive. Thanks to Panzura’s IP, Vizion.ai is able to bring down the cost by caching recent data on SSDs and keeping the rest in cheaper object storage pools.

He also noted that the company is positioning the overall Vizion.ai service, with the Elasticsearch service as one of the earliest components, as a platform for running AI and ML workloads. Support for TensorFlow, PredictionIO (which plays nicely with Elasticsearch) and other tools is also in the works. “We want to make this an easy serverless ML/AI consumption in a multi-cloud fashion, where not only can you leverage the compute, but you can also have your storage of record at a very cost-effective price point.”


Source: The Tech Crunch


Scaleway updates its high-performance instances

Posted by on Mar 12, 2019 in Cloud, Developer, Europe, Scaleway | 0 comments

Cloud-hosting company Scaleway refreshed its lineup of high-performance instances today. These instances are now all equipped with AMD EPYC CPUs, DDR4 RAM and NVMe SSD storage. The more you pay, the more computing power, RAM, storage and bandwidth you get.

High-performance plans start at €0.078 per hour or €39 per month ($44.20), whichever is lower at the end of the month. For this price you get 4 cores, 16GB of RAM, 150GB of storage and 400Mbps of bandwidth.

If you double the price, you get twice as many cores, RAM and storage. Higher plans get a tiny discount on performance bumps. And the fastest instance comes with 48 cores, 256GB of RAM, 600GB of storage and 2Gbps of bandwidth. That beast can cost as much as €569 per month ($645).
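As a quick sanity check on the billing model, a full month at the entry-level hourly rate (720 hours at €0.078) comes to roughly €56, so the €39 monthly cap is what an always-on instance would actually pay. Below is a minimal sketch of that “whichever is lower” rule, using the entry-level figures quoted above:

```python
HOURLY_RATE = 0.078   # EUR per hour, entry-level plan
MONTHLY_CAP = 39.0    # EUR per month, entry-level plan

def monthly_bill(hours_used: float) -> float:
    """Bill at the hourly rate, but never more than the monthly cap."""
    return min(hours_used * HOURLY_RATE, MONTHLY_CAP)

print(monthly_bill(100))       # 7.8  -> light usage stays on hourly billing
print(monthly_bill(24 * 30))   # 39.0 -> an always-on month hits the cap (720 h * 0.078 = 56.16)
```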

Here’s the full lineup:

Scaleway offered high-performance instances in the past, called “X64” instances, and they were somewhat cheaper. Despite the price bump, Scaleway manages to stay competitive against Linode, DigitalOcean and others.

A server with 6 CPU cores and 16GB of RAM costs $80 per month on Linode. After that, you have to choose between high memory plans and dedicated CPU plans, so it’s harder to compare.

On DigitalOcean, an instance with 16GB of RAM and 4 CPU cores costs $120 per month. The most expensive instance costs $1,200 per month, and it doesn’t match the specifications of Scaleway’s most expensive instance.


Source: The Tech Crunch
