
The blog of DataDiggers


Vizion.ai launches its managed Elasticsearch service

Posted on Mar 28, 2019 in Amazon Web Services, api, Artificial Intelligence, Caching, cloud computing, computing, Developer, Elastic, Elasticsearch, Enterprise, ML, TC, world wide web

Setting up Elasticsearch, the open-source system that many companies large and small use to power their distributed search and analytics engines, isn’t the hardest thing. What is very hard, though, is to provision the right amount of resources to run the service, especially when your users’ demand comes in spikes, without overpaying for unused capacity. Vizion.ai’s new Elasticsearch Service does away with all of this by essentially offering Elasticsearch as a service and only charging its customers for the infrastructure they use.

Vizion.ai’s service automatically scales up and down as needed. It’s a managed service and delivered as a SaaS platform that can support deployments on both private and public clouds, with full API compatibility with the standard Elastic stack that typically includes tools like Kibana for visualizing data, Beats for sending data to the service and Logstash for transforming the incoming data and setting up data pipelines. Users can easily create several stacks for testing and development, too, for example.
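If the API compatibility is as full as advertised, moving off a self-hosted cluster should amount to pointing a standard Elasticsearch client at the managed endpoint. A minimal sketch in Python, with a hypothetical endpoint and credentials (everything else is the stock elasticsearch-py client):

```python
from elasticsearch import Elasticsearch

# Hypothetical managed endpoint and credentials; the point is that the
# client code is identical to what you would write against a
# self-hosted Elastic stack.
es = Elasticsearch(
    ["https://my-stack.vizion-example.com:9243"],
    http_auth=("elastic", "changeme"),
)

# Index a document, refresh the index, and search it back.
es.index(index="logs", body={"message": "hello world", "level": "info"})
es.indices.refresh(index="logs")
print(es.search(index="logs", body={"query": {"match": {"message": "hello"}}}))
```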

Vizion.ai GM and VP Geoff Tudor

“When you go into the AWS Elasticsearch service, you’re going to be looking at dozens or hundreds of permutations for trying to build your own cluster,” Vizion.ai’s VP and GM Geoff Tudor told me. “Which instance size? How many instances? Do I want geographical redundancy? What’s my networking? What’s my security? And if you choose wrong, then that’s going to impact the overall performance. […] We do balancing dynamically behind that infrastructure layer.” To do this, the service looks at the utilization patterns of a given user and then allocates resources to optimize for the specific use case.

What Vizion.ai has done here is take some of the work from its parent company Panzura, a multi-cloud storage service for enterprises that has plenty of patents around data caching, and apply it to this new Elasticsearch service.

There are obviously other companies that offer commercial Elasticsearch platforms already. Tudor acknowledges this, but argues that his company’s platform is different. With other products, he argues, you have to decide on the size of your block storage for your metadata upfront, for example, and you typically want SSDs for better performance, which can quickly get expensive. Thanks to Panzura’s IP, Vizion.ai is able to bring down the cost by caching recent data on SSDs and keeping the rest in cheaper object storage pools.

He also noted that the company is positioning the overall Vizion.ai service, with the Elasticsearch service as one of the earliest components, as a platform for running AI and ML workloads. Support for TensorFlow, PredictionIO (which plays nicely with Elasticsearch) and other tools is also in the works. “We want to make this an easy serverless ML/AI consumption in a multi-cloud fashion, where not only can you leverage the compute, but you can also have your storage of record at a very cost-effective price point.”


Source: The Tech Crunch


Google’s still not sharing cloud revenue

Posted on Feb 5, 2019 in Alphabet, Cloud, cloud computing, cloud revenue, Diane Greene, Earnings, Enterprise, G Suite, Google, google cloud platform, ruth porat, Sundar Pichai

Google has shared its cloud revenue exactly once over the last several years. Silence tends to invite speculation to fill the information vacuum. Luckily, some analyst firms try to fill the void, and it looks like Google’s cloud business is actually trending in the right direction, even if the company isn’t willing to tell us an exact number.

When Google last reported its cloud revenue, about this time last year, it indicated it had earned $1 billion in revenue for the quarter, which included Google Cloud Platform and G Suite combined. Diane Greene, who was head of Google Cloud at the time, called it an “elite business,” but in reality it was pretty small potatoes compared to Microsoft’s and Amazon’s cloud numbers, which were pulling in $4-$5 billion a quarter between them at the time. Google was looking at a $4 billion run rate for the entire year.

Google apparently didn’t like the reaction it got from that disclosure, so it stopped talking about cloud revenue. Yesterday, when Google’s parent company, Alphabet, issued its quarterly earnings report, it failed, to nobody’s surprise, to report cloud revenue yet again – at least not directly.

Google CEO Sundar Pichai gave some hints, but never revealed an exact number. Instead he talked in vague terms calling Google Cloud “a fast-growing multibillion-dollar business.” The only time he came close to talking about actual revenue was when he said, “Last year, we more than doubled both the number of Google Cloud Platform deals over $1 million as well as the number of multiyear contracts signed. We also ended the year with another milestone, passing 5 million paying customers for our cloud collaboration and productivity solution, G Suite.”

OK, it’s not an actual dollar figure, but it gives a sense that the company is actually moving the needle in the cloud business. A bit later in the call, CFO Ruth Porat threw in this cloud revenue nugget: “We are also seeing a really nice uptick in the number of deals that are greater than $100 million and really pleased with the success and penetration there. At this point, not updating further.” She is not updating further. Got it.

That brings us to a company that guessed for us, Canalys. While the firm didn’t share its methodology, it did come up with a figure of $2.2 billion for the quarter. Given that the company is closing larger deals and was at a billion last year, this figure feels like it’s probably in the right ballpark, but of course it’s not from the horse’s mouth, so we can’t know for certain.

Frankly, I’m a little baffled that Alphabet’s shareholders let the company get away with this complete lack of transparency. You would think people would want to know exactly what it is making on that crucial part of the business, wouldn’t you? As a cloud market watcher, I know I would. So we’re left to companies like Canalys to fill in the blanks, but it’s certainly not as satisfying as Google actually telling us. Maybe next quarter.


Source: The Tech Crunch


Microsoft acquires Citus Data

Posted on Jan 24, 2019 in citus data, Cloud, cloud computing, data management, databases, Enterprise, Exit, free software, M&A, Microsoft, nosql, postgresql, relational database, Startups, Y Combinator

Microsoft today announced that it has acquired Citus Data, a company that focused on making PostgreSQL databases faster and more scalable. Citus’ open-source PostgreSQL extension essentially turns the application into a distributed database and, while there has been a lot of hype around the NoSQL movement and document stores, relational databases — and especially PostgreSQL — are still a growing market, in part because of tools from companies like Citus that overcome some of their earlier limitations.
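To make the “distributed database” claim concrete, here is a minimal sketch of the core Citus workflow: load the extension, then shard an ordinary PostgreSQL table with Citus’ create_distributed_table() function. The connection details are hypothetical:

```python
import psycopg2

# Hypothetical connection string for the Citus coordinator node.
conn = psycopg2.connect("host=coordinator.example.com dbname=app user=app")
cur = conn.cursor()

# Load the extension and create an ordinary PostgreSQL table...
cur.execute("CREATE EXTENSION IF NOT EXISTS citus;")
cur.execute("""
    CREATE TABLE events (
        tenant_id bigint NOT NULL,
        event_id  bigint NOT NULL,
        payload   jsonb
    );
""")
# ...then shard it across the worker nodes by tenant_id. Queries that
# filter on tenant_id get routed to the worker holding that shard.
cur.execute("SELECT create_distributed_table('events', 'tenant_id');")
conn.commit()
```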

Unsurprisingly, Microsoft plans to work with the Citus Data team to “accelerate the delivery of key, enterprise-ready features from Azure to PostgreSQL and enable critical PostgreSQL workloads to run on Azure with confidence.” The Citus co-founders echo this in their own statement, noting that “as part of Microsoft, we will stay focused on building an amazing database on top of PostgreSQL that gives our users the game-changing scale, performance, and resilience they need. We will continue to drive innovation in this space.”

PostgreSQL is obviously an open-source tool, and while the fact that Microsoft is now a major open-source contributor doesn’t come as a surprise anymore, it’s worth noting that the company stresses that it will continue to work with the PostgreSQL community. In an email, a Microsoft spokesperson also noted that “the acquisition is a proof point in the company’s commitment to open source and accelerating Azure PostgreSQL performance and scale.”

Current Citus customers include the likes of real-time analytics service Chartbeat, email security service Agari and PushOwl, though the company notes that it also counts a number of Fortune 100 companies among its users (they tend to stay anonymous). The company offers a database as a service, an on-premises enterprise version and a free open-source edition. For the time being, it seems like that’s not changing, though over time I would suspect that Microsoft will transition users of the hosted service to Azure.

The price of the acquisition was not disclosed. Citus Data, which was founded in 2010 and graduated from the Y Combinator program, previously raised more than $13 million from the likes of Khosla Ventures, SV Angel and Data Collective.


Source: The Tech Crunch


How open source software took over the world

Posted on Jan 12, 2019 in apache, author, cloud computing, Cloudera, cockroach labs, Column, computing, Databricks, designer, executive, free software, Getty, GitHub, HashiCorp, hortonworks, IBM, linus torvalds, linux, Microsoft, microsoft windows, mongo, MongoDB, mulesoft, mysql, open source software, operating system, operating systems, oracle, red hat, RedHat, sap, Software, software as a service, TC, Yahoo

It was just five years ago that there was an ample dose of skepticism from investors about the viability of open source as a business model. The common thesis was that Redhat was a snowflake and that no other open source company would be significant in the software universe.

Fast forward to today and we’ve witnessed the growing excitement in the space: Redhat is being acquired by IBM for $32 billion (3x its market cap from 2014); Mulesoft was acquired after going public for $6.5 billion; MongoDB is now worth north of $4 billion; Elastic’s IPO now values the company at $6 billion; and, through the merger of Cloudera and Hortonworks, a new company with a market cap north of $4 billion will emerge. In addition, there’s a growing cohort of impressive OSS companies working their way through the growth stages of their evolution: Confluent, HashiCorp, DataBricks, Kong, Cockroach Labs and many others. Given the relative multiples that Wall Street and private investors are assigning to these open source companies, it seems pretty clear that something special is happening.

So, why did this movement that once represented the bleeding edge of software become the hot place to be? There are a number of fundamental changes that have advanced open source businesses and their prospects in the market.

David Paul Morris/Bloomberg via Getty Images

From Open Source to Open Core to SaaS

The original open source projects were not really businesses; they were revolutions against the unfair profits that closed-source software companies were reaping. Microsoft, Oracle, SAP and others were extracting monopoly-like “rents” for software, which the top developers of the time didn’t believe was world class. So, beginning with the most broadly used components of software – operating systems and databases – progressive developers collaborated, often asynchronously, to author great pieces of software. Everyone could not only see the software in the open, but through a loosely knit governance model, they added, improved and enhanced it.

The software was originally created by and for developers, which meant that at first it wasn’t the most user-friendly. But it was performant, robust and flexible. These merits gradually percolated across the software world and, over a decade, Linux became the second most popular OS for servers (next to Windows); MySQL mirrored that feat by eating away at Oracle’s dominance.

The first entrepreneurial ventures attempted to capitalize on this adoption by offering “enterprise-grade” support subscriptions for these software distributions. Redhat emerged as the winner in the Linux race, and MySQL (the company) as the winner in databases. These businesses had some obvious limitations – it was harder to monetize software with just support services – but the market size for operating systems and databases was so large that, in spite of more challenged business models, sizeable companies could be built.

The successful adoption of Linux and MySQL laid the foundation for the second generation of open source companies – the poster children of this generation were Cloudera and Hortonworks. These open source projects and businesses were fundamentally different from the first generation on two dimensions. First, the software was principally developed within an existing company and not by a broad, unaffiliated community (in the case of Hadoop, the software took shape within Yahoo!). Second, these businesses were based on the model that only parts of the software in the project were licensed for free, so they could charge customers for the use of some of the software under a commercial license. The commercial aspects were specifically built for enterprise production use and thus easier to monetize. These companies, therefore, had the ability to capture more revenue even if the market for their product didn’t have quite as much appeal as operating systems and databases.

However, there were downsides to this second generation model of open source business. The first was that no company singularly held ‘moral authority’ over the software – and therefore the contenders competed for profits by offering increasing parts of their software for free. Second, these companies often balkanized the evolution of the software in an attempt to differentiate themselves. To make matters more difficult, these businesses were not built with a cloud service in mind. Therefore, cloud providers were able to use the open source software to create SaaS businesses from the same software base. Amazon’s EMR is a great example of this.

The latest evolution came when entrepreneurial developers grasped the business model challenges inherent in the first two generations – Gen 1 and Gen 2 – of open source companies, and evolved their projects with two important elements. The first is that the open source software is now developed largely within the confines of businesses. Often, more than 90% of the lines of code in these projects are written by the employees of the company that commercialized the software. Second, these businesses offer their own software as a cloud service from very early on. In a sense, these are Open Core / Cloud service hybrid businesses with multiple pathways to monetize their product. By offering the products as SaaS, these businesses can interweave open source software with commercial software so customers no longer have to worry about which license they should be taking. Companies like Elastic, Mongo, and Confluent with services like Elastic Cloud, Confluent Cloud, and MongoDB Atlas are examples of this Gen 3. The implications of this evolution are that open source software companies now have the opportunity to become the dominant business model for software infrastructure.

The Role of the Community

While the products of these Gen 3 companies are definitely more tightly controlled by the host companies, the open source community still plays a pivotal role in the creation and development of the open source projects. For one, the community still discovers the most innovative and relevant projects. They star the projects on GitHub, download the software in order to try it, and evangelize what they perceive to be the better project so that others can benefit from great software. Much like how a good blog post or a tweet spreads virally, great open source software leverages network effects. It is the community that is the source of promotion for that virality.

The community also ends up effectively being the “product manager” for these projects. It asks for enhancements and improvements; it points out the shortcomings of the software. The feature requests are not in a product requirements document, but on GitHub, in comment threads and on Hacker News. And, if an open source project diligently responds to the community, it will shape itself to the features and capabilities that developers want.

The community also acts as the QA department for open source software. It will identify bugs and shortcomings in the software; test 0.x versions diligently; and give the companies feedback on what is working or what is not.  The community will also reward great software with positive feedback, which will encourage broader use.

What has changed though, is that the community is not as involved as it used to be in the actual coding of the software projects. While that is a drawback relative to Gen 1 and Gen 2 companies, it is also one of the inevitable realities of the evolving business model.

Linus Torvalds was the designer of the open-source operating system Linux.

Rise of the Developer

It is also important to realize the increasing importance of the developer for these open source projects. The traditional go-to-market model of closed source software targeted IT as the purchasing center of software. While IT still plays a role, the real customers of open source are the developers who often discover the software, and then download and integrate it into the prototype versions of the projects that they are working on. Once “infected” by open source software, these projects work their way through the development cycles of organizations from design, to prototyping, to development, to integration and testing, to staging, and finally to production. By the time the open source software gets to production it is rarely, if ever, displaced. Fundamentally, the software is never “sold”; it is adopted by the developers who appreciate the software more because they can see it and use it themselves rather than being subject to it based on executive decisions.

In other words, open source software spreads through the true experts, and it makes the selection process much more grassroots than it has ever been historically. The developers basically vote with their feet. This is in stark contrast to how software has traditionally been sold.

Virtues of the Open Source Business Model

The resulting business model of an open source company looks quite different than a traditional software business. First of all, the revenue line is different. Side by side, a closed source software company will generally be able to charge more per unit than an open source company. Even today, customers have some level of resistance to paying a high price per unit for software that is theoretically “free.” But even though open source software has a lower cost per unit, it makes up for that in total market size by leveraging the elasticity in the market. When something is cheaper, more people buy it. That’s why open source companies see such massive and rapid adoption when they achieve product-market fit.

Another great advantage of open source companies is their far more efficient and viral go-to-market motion. The first and most obvious benefit is that a user is already a “customer” before she even pays for it. Because so much of the initial adoption of open source software comes from developers organically downloading and using the software, the companies themselves can often bypass both the marketing pitch and the proof-of-concept stage of the sales cycle. The sales pitch is more along the lines of, “you already use 500 instances of our software in your environment, wouldn’t you like to upgrade to the enterprise edition and get these additional features?” This translates to much shorter sales cycles, the need for far fewer sales engineers per account executive, and much quicker payback periods on the cost of selling. In fact, in an ideal situation, open source companies can operate with favorable account-executive-to-systems-engineer ratios and can go from sales qualified lead (SQL) to closed sale within one quarter.

This virality allows open source software businesses to be far more efficient than traditional software businesses on a cash consumption basis. Some of the best open source companies have been able to grow their business at triple-digit growth rates well into their life while maintaining moderate cash burn rates. This is hard to imagine in a traditional software company. Needless to say, less cash consumption equals less dilution for the founders.

Photo courtesy of Getty Images

Open Source to Freemium

One last aspect of the changing open source business that is worth elaborating on is the gradual movement from true open source to community-assisted freemium. As mentioned above, the early open source projects leveraged the community as key contributors to the software base. In addition, even for slight elements of commercially-licensed software, there was significant pushback from the community. These days the community and the customer base are much more knowledgeable about the open source business model, and there is an appreciation for the fact that open source companies deserve to have a “paywall” so that they can continue to build and innovate.

In fact, from a customer perspective, the two value propositions of open source software are that you can a) read the code and b) treat it as freemium. The notion of freemium is that you can basically use it for free until it’s deployed in production or at some degree of scale. Companies like Elastic and Cockroach Labs have gone as far as actually open sourcing all their software but applying a commercial license to parts of the software base. The rationale is that real enterprise customers would pay whether the software is open or closed, and they are more incentivized to use commercial software if they can actually read the code. Indeed, there is a risk that someone could read the code, modify it slightly, and fork the distribution. But in developed economies, where much of the rents exist anyway, it’s unlikely that enterprise companies will elect the copycat as a supplier.

A key enabler of this movement has been the more modern software licenses that companies have either originally embraced or migrated to over time. Mongo’s new license, as well as those of Elastic and Cockroach, are good examples. Unlike the Apache license, which was often the starting point for open source projects a decade ago, these licenses are far more business-friendly, and most modern open source businesses are adopting them.

The Future

When we originally penned this article on open source four years ago, we aspirationally hoped that we would see the birth of iconic open source companies. At a time when there was only one model – Redhat – we believed that there would be many more. Today, we see a healthy cohort of open source businesses, which is quite exciting. I believe we are just scratching the surface of the kind of iconic companies that we will see emerge from the open source gene pool. From one perspective, these companies valued in the billions are a testament to the power of the model. What is clear is that open source is no longer a fringe approach to software. When top companies around the world are polled, few of them intend to have their core software systems be anything but open source. And if the Fortune 5000 migrate their spend on closed source software to open source, we will see the emergence of a whole new landscape of software companies, with the leaders of this new cohort valued in the tens of billions of dollars.

Clearly, that day is not tomorrow. These open source companies will need to grow and mature and develop their products and organization in the coming decade. But the trend is undeniable and here at Index we’re honored to have been here for the early days of this journey.


Source: The Tech Crunch


Google’s Cloud Spanner database adds new features and regions

Posted on Dec 19, 2018 in Apache Hadoop, Asia, Cloud, cloud computing, Cloud Spanner, Developer, Enterprise, Europe, google cloud, google cloud platform, Iowa, relational database, TC

Cloud Spanner, Google’s globally distributed relational database service, is getting a bit more distributed today with the launch of a new region and new ways to set up multi-region configurations. The service is also getting a new feature that gives developers deeper insights into their most resource-consuming queries.

With this update, Google is adding Hong Kong (asia-east2), its newest data center location, to the Cloud Spanner lineup. Cloud Spanner is now available in 14 out of 18 Google Cloud Platform (GCP) regions, including seven the company added this year alone. The plan is to bring Cloud Spanner to every new GCP region as they come online.

The other new region-related news is the launch of two new configurations for multi-region coverage. One, called eur3, focuses on the European Union, and is obviously meant for users there who mostly serve a local customer base. The other is called nam6 and focuses on North America, with coverage across both coasts and the middle of the country, using data centers in Oregon, Los Angeles, South Carolina and Iowa. Previously, the service only offered a North American configuration with three regions and a global configuration with three data centers spread across North America, Europe and Asia.
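In practice, picking one of these configurations is just a matter of naming it when an instance is created. A minimal sketch with the Python client, using the new nam6 configuration; the project ID, instance ID and node count are hypothetical:

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")  # hypothetical project

instance = client.instance(
    "my-instance",  # hypothetical instance ID
    configuration_name="projects/my-project/instanceConfigs/nam6",
    display_name="North America, coast to coast",
    node_count=3,
)
operation = instance.create()  # returns a long-running operation
operation.result(timeout=300)  # block until the instance is ready
```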

While Cloud Spanner is obviously meant for global deployments, these new configurations are great for users who only need to serve certain markets.

As far as the new query features are concerned, Cloud Spanner is now making it easier for developers to view, inspect and debug queries. The idea here is to give developers better visibility into their most frequent and expensive queries (and maybe make them less expensive in the process).
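The announcement doesn’t detail the mechanics, but Cloud Spanner’s built-in SPANNER_SYS statistics tables are one way to surface the most expensive recent queries from code. A sketch using the Python client; the instance and database names are hypothetical:

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")
database = client.instance("my-instance").database("my-database")  # hypothetical path

# Pull the ten most CPU-hungry query shapes from the last stats window.
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        """
        SELECT text, execution_count, avg_cpu_seconds
        FROM SPANNER_SYS.QUERY_STATS_TOP_10MINUTE
        ORDER BY avg_cpu_seconds DESC
        LIMIT 10
        """
    )
    for text, count, cpu_seconds in rows:
        print(f"{count:>8} runs  {cpu_seconds:.4f}s avg CPU  {text[:60]}")
```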

In addition to the Cloud Spanner news, Google Cloud today announced that its Cloud Dataproc Hadoop and Spark service now supports the R language, in addition to Python 3.7 support on App Engine.


Source: The Tech Crunch


Putting the band back together, ExactTarget execs reunite to launch MetaCX

Posted on Dec 6, 2018 in alpha, api, business software, chief technology officer, cloud applications, cloud computing, computing, customer relationship management, exacttarget, indianapolis, Kobie Fuller, Los Angeles, Marketing, pilot, president, Salesforce Marketing Cloud, salesforce.com, scott dorsey, software as a service, TC, upfront ventures

Scott McCorkle has spent most of his professional career thinking about business-to-business software and how to improve it for a company’s customers.

The former president of ExactTarget and later chief executive of Salesforce Marketing Cloud has made billions of dollars building products to help support customer service, and now he’s back at it again with his latest venture, MetaCX.

Alongside Jake Miller, the former chief engineering lead at Salesforce Marketing Cloud and chief technology officer at ExactTarget, and David Duke, the chief customer officer and another ExactTarget alumnus, McCorkle has raised $14 million to build a white-labeled service that offers a toolkit for monitoring, managing and supporting customers as they use new software tools.

The questions he keeps coming back to: Are customers doing the things I want them to be doing in my product? What do they want to achieve, and why did they buy my product in the first place?

“MetaCX sits above any digital product,” McCorkle says. And its software monitors and manages the full spectrum of the customer relationship with that product. “It is API embeddable and we have a full user experience layer.”

For the company’s customers, MetaCX provides a dashboard that includes “outcomes, the collaboration, metrics tracked as part of the relationship and all the metrics around that are part of that engagement layer,” says McCorkle.

The first offerings will launch in the beginning of 2019, but the company already has dozens of customers using its pilot, McCorkle said.

The Indianapolis-based company is one of the latest spinouts from High Alpha Studio, an accelerator and venture capital studio formed by Scott Dorsey, the former chief executive officer of ExactTarget. As one of a number of venture investment firms and studios cropping up in the Midwest, High Alpha is something of a bellwether for the viability of the venture model in emerging ecosystems. And, in that respect, the success of the MetaCX round speaks volumes, especially since the round was led by the Los Angeles-based venture firm Upfront Ventures.

“Our founding team includes world-class engineers, designers and architects who have been building billion-dollar SaaS products for two decades,” said McCorkle, in a statement. “We understand that enterprises often struggle to achieve the business outcomes they expect from SaaS, and the renewal process for SaaS suppliers is often an ambiguous guessing game. Our industry is shifting from a subscription economy to a performance economy, where suppliers and buyers of digital products need to transparently collaborate to achieve outcomes.”

As a result of the investment, Upfront partner Kobie Fuller will be taking a seat on the MetaCX board of directors alongside McCorkle and Dorsey.

“The MetaCX team is building a truly disruptive platform that will inject data-driven transparency, commitment and accountability against promised outcomes between SaaS buyers and vendors,” said Fuller, in a statement. “Having been on the journey with much of this team while shaping the martech industry with ExactTarget, I’m incredibly excited to partner again in building another category-defining business with Scott and his team in Indianapolis.”



Source: The Tech Crunch


AWS launches Arm-based servers for EC2

Posted on Nov 27, 2018 in Amazon Web Services, amd, ARM, AWS, AWS re:Invent 2018, Cloud, cloud computing, Developer, linux, operating system, operating systems, TC, Ubuntu, web servers

At its re:Invent conference in Las Vegas, AWS today announced the launch of Arm-based servers for its EC2 cloud computing service. These aren’t run-of-the-mill Arm chips, though. AWS took the standard Arm cores and then customized them to fit its needs. The company says that its so-called AWS Graviton Processors have been optimized for performance and cost, with a focus on scale-out workloads that can be spread across a number of smaller instances (think containerized microservices, web servers, caching fleets, etc.).

The first set of instances, called A1, is now available in a number of AWS regions in the U.S. and Europe. They support all of AWS’s standard instance pricing models, including on-demand, reserved instance, spot instance, dedicated instance and dedicated host.

For now, you can only use Amazon Linux 2, RHEL and Ubuntu as operating systems for these machines, but AWS promises that additional operating system support will launch in the future.

Because these are Arm servers, you’ll obviously have to recompile any native code for them before you can run your applications on them. Virtually any application written in a scripting language, though, will probably run without any modifications.

Prices for these instances start at $0.0255/hour for an a1.medium machine with 1 CPU and 2 GiB of RAM and go up to $0.4080/hour for machines with 16 CPUs and 32 GiB of RAM. That’s maybe not as cheap as you would’ve expected given that an X86-based t3.nano server starts at $0.0052/hour, but you can always save quite a bit by using spot instances, of course. Until we see some benchmarks, though, it’s hard to compare these different machine types anyway.
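Since the A1 instances hang off the standard EC2 API, launching one only requires naming the new instance type and an Arm-compatible image. A sketch with boto3; the AMI ID is hypothetical (it would need to be an arm64 build, such as arm64 Amazon Linux 2):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical arm64 Amazon Linux 2 AMI
    InstanceType="a1.medium",         # 1 vCPU, 2 GiB RAM, $0.0255/hour on demand
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```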

As Amazon’s Jeff Barr notes in today’s announcement, the company’s move to its so-called Nitro System now allows it to launch new instance types at a faster clip. Nitro essentially provides the building blocks for creating new instance types that the team can then mix and match as needed.

It’s worth noting that AWS also launched support for AMD EPYC processors earlier this month.



Source: The Tech Crunch


OpenStack’s latest release focuses on bare metal clouds and easier upgrades

Posted on Aug 30, 2018 in Cloud, cloud computing, Enterprise, enterprise software, openstack, openstack foundation, TC

The OpenStack Foundation today released the 18th version of its namesake open-source cloud infrastructure software. The project has had its ups and downs, but it remains the de facto standard for running and managing large private clouds.

What’s been interesting to watch over the years is how the project’s releases have mirrored what’s been happening in the wider world of enterprise software. The core features of the platform (compute, storage, networking) are very much in place at this point, allowing the project to look forward and to add new features that enterprises are now requesting.

The new release, dubbed Rocky, puts an emphasis on bare metal clouds, for example. While the majority of enterprises still run their workloads in virtual machines, a lot of them are now looking at containers as an alternative with less overhead and the promise of faster development cycles. Many of these enterprises want to run those containers on bare metal clouds and the project is reacting to this with its “Ironic” project that offers all of the management and automation features necessary to run these kinds of deployments.

“There’s a couple of big features that landed in Ironic in the Rocky release cycle that we think really set it up well for OpenStack bare metal clouds to be the foundation for both running VMs and containers,” OpenStack Foundation VP of marketing and community Lauren Sell told me. 

Ironic itself isn’t new, but in today’s update, Ironic gets user-managed BIOS settings (to configure power management, for example) and RAM disk support for high-performance computing workloads. Magnum, OpenStack’s service for using container engines like Docker Swarm, Apache Mesos and Kubernetes, is now also a Kubernetes certified installer, meaning that users can be confident that OpenStack and Kubernetes work together just like a user would expect.
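For a sense of what managing an Ironic fleet looks like from code, here is a minimal sketch using the OpenStack SDK to list bare metal nodes and their states; the cloud name refers to a hypothetical entry in clouds.yaml:

```python
import openstack

# "my-cloud" is a hypothetical clouds.yaml entry pointing at an
# OpenStack deployment with the Ironic bare metal service enabled.
conn = openstack.connect(cloud="my-cloud")

# List the Ironic-managed machines with their provision and power states.
for node in conn.baremetal.nodes():
    print(node.name, node.provision_state, node.power_state)
```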

Another trend that’s becoming quite apparent is that many enterprises that build their own private clouds do so because they have very specific hardware needs. Often, that includes GPUs and FPGAs, for example, for machine learning workloads. To make it easier for these businesses to use OpenStack, the project now includes a lifecycle management service for these kinds of accelerators.

“Specialized hardware is getting a lot of traction right now,” OpenStack CTO Mark Collier noted. “And what’s interesting is that FPGAs have been around for a long time but people are finding out that they are really useful for certain types of AI, because they’re really good at doing the relatively simple math that you need to repeat over and over again millions of times. It’s kind of interesting to see this kind of resurgence of certain types of hardware that maybe was seen as going to be disrupted by cloud and now it’s making a roaring comeback.”

With this update, the OpenStack project is also enabling easier upgrades, something that was long a daunting process for enterprises. Because it was so hard, many chose to simply not update to the latest releases and often stayed a few releases behind. Now, the so-called Fast Forward Upgrade feature allows these users to get on new releases faster, even if they are well behind the project’s own cycle. Oath, which owns TechCrunch, runs a massive OpenStack cloud, for example, and the team recently upgraded a 20,000-core deployment from Juno (the 10th OpenStack release) to Ocata (the 15th release).

The fact that Vexxhost, a Canadian cloud provider, is already offering support for the Rocky release in its new Silicon Valley cloud today is yet another sign that updates are getting a bit easier (the public cloud side of OpenStack often gets overlooked, too, but it continues to grow).


Source: The Tech Crunch


Storage provider Cloudian raises $94M

Posted on Aug 29, 2018 in alpha, Artificial Intelligence, Cloud, cloud computing, cloud storage, Cloudian, computing, data management, Enterprise, funding, Goldman Sachs, healthcare, information, machine learning, medical imaging, NTT Docomo Ventures, petabyte, Storage

Cloudian, a company that specializes in helping businesses store petabytes of data, today announced that it has raised a $94 million Series E funding round. Investors in this round, which is one of the largest we have seen for a storage vendor, include Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures and WS Investments. This round includes a $25 million investment from Digital Alpha, which was first announced earlier this year.

With this, the seven-year-old company has now raised a total of $174 million.

As the company told me, it now has about 160 employees and 240 enterprise customers. Cloudian has found its sweet spot in managing the large video archives of entertainment companies, but its customers also include healthcare companies, automobile manufacturers and Formula One teams.

What’s important to stress here is that Cloudian’s focus is on on-premise storage, not cloud storage, though it does offer support for multi-cloud data management, as well. “Data tends to be most effectively used close to where it is created and close to where it’s being used,” Cloudian VP of worldwide sales Jon Ash told me. “That’s because of latency, because of network traffic. You can almost always get better performance, better control over your data if it is being stored close to where it’s being used.” He also noted that it’s often costly and complex to move that data elsewhere, especially when you’re talking about the large amounts of information that Cloudian’s customers need to manage.

Unsurprisingly, companies that have this much data now want to use it for machine learning, too, so Cloudian is starting to get into this space, as well. As Cloudian CEO and co-founder Michael Tso also told me, companies are now aware that the data they pull in, no matter whether that’s from IoT sensors, cameras or medical imaging devices, will only become more valuable over time as they try to train their models. If they decide to throw the data away, they run the risk of having nothing with which to train their models.

Cloudian plans to use the new funding to expand its global sales and marketing efforts and increase its engineering team. “We have to invest in engineering and our core technology, as well,” Tso noted. “We have to innovate in new areas like AI.”

As Ash also stressed, Cloudian’s business is really data management — not just storage. “Data is coming from everywhere and it’s going everywhere,” he said. “The old-school storage platforms that were siloed just don’t work anymore.”


Source: The Tech Crunch


VMware pulls AWS’s Relational Database Service into the data center

Posted on Aug 27, 2018 in Amazon Web Services, Andy Jassy, ceo, cloud computing, computing, Microsoft, mysql, oracle, postgresql, relational database, TC, vmware

Here’s some unusual news: AWS, Amazon’s cloud computing arm, today announced that it plans to bring its Relational Database Service (RDS) to VMware, no matter whether that’s VMware Cloud on AWS or a privately hosted VMware deployment in a corporate data center.

While some of AWS’s competitors have long focused on these kinds of hybrid cloud deployments, AWS never really put the same kind of emphasis on this. Clearly, though, that’s starting to change — maybe in part because Microsoft and others are doing quite well in this space.

“Managing the administrative and operational muck of databases is hard work, error-prone and resource intensive,” said AWS CEO Andy Jassy. “It’s why hundreds of thousands of customers trust Amazon RDS to manage their databases at scale. We’re excited to bring this same operationally battle-tested service to VMware customers’ on-premises and hybrid environments, which will not only make database management much easier for enterprises, but also make it simpler for these databases to transition to the cloud.”

With Amazon RDS on VMware, enterprises will be able to use AWS’s technology to run and manage Microsoft SQL Server, Oracle, PostgreSQL, MySQL and MariaDB databases in their own data centers. The idea here, AWS says, is to make it easy for enterprises to set up and manage their databases wherever they want to host their data — and to then migrate it to AWS when they choose to do so.
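The announcement suggests the on-premises experience will mirror the existing RDS API. For reference, this is what provisioning a managed PostgreSQL instance looks like against cloud RDS today, with hypothetical identifiers; how much of this carries over unchanged to VMware-hosted deployments remains to be seen:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Provision a managed PostgreSQL instance; RDS handles backups,
# patching and failover behind this one call.
rds.create_db_instance(
    DBInstanceIdentifier="my-postgres",  # hypothetical identifier
    DBInstanceClass="db.m4.large",
    Engine="postgres",
    MasterUsername="dbadmin",
    MasterUserPassword="change-me",      # keep real credentials in a secrets store
    AllocatedStorage=100,                # GiB
)
```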

This new service will soon be in private preview, so we don’t know all that much about how this will work in practice or what it will cost. AWS promises, however, that the experience will pretty much be the same as in the cloud and that RDS on VMware will handle all the updates and patches automatically.

Today’s announcement comes about two years after the launch of VMware Cloud on AWS, which was pretty much the reverse of today’s news. With VMware Cloud on AWS, enterprises can take their existing VMware deployments and move them to AWS.


Source: The Tech Crunch
