
The blog of DataDiggers


OpenStack’s latest release focuses on bare metal clouds and easier upgrades

Posted on Aug 30, 2018 in Cloud, cloud computing, Enterprise, enterprise software, openstack, openstack foundation, TC | 0 comments

The OpenStack Foundation today released the 18th version of its namesake open-source cloud infrastructure software. The project has had its ups and downs, but it remains the de facto standard for running and managing large private clouds.

What’s been interesting to watch over the years is how the project’s releases have mirrored what’s been happening in the wider world of enterprise software. The core features of the platform (compute, storage, networking) are very much in place at this point, allowing the project to look forward and to add new features that enterprises are now requesting.

The new release, dubbed Rocky, puts an emphasis on bare metal clouds, for example. While the majority of enterprises still run their workloads in virtual machines, many are now looking at containers as an alternative with less overhead and the promise of faster development cycles. Many of these enterprises want to run those containers on bare metal clouds, and the project is responding with its “Ironic” project, which offers all of the management and automation features necessary to run these kinds of deployments.

“There’s a couple of big features that landed in Ironic in the Rocky release cycle that we think really set it up well for OpenStack bare metal clouds to be the foundation for both running VMs and containers,” OpenStack Foundation VP of marketing and community Lauren Sell told me. 

Ironic itself isn’t new, but in today’s update, Ironic gets user-managed BIOS settings (to configure power management, for example) and RAM disk support for high-performance computing workloads. Magnum, OpenStack’s service for using container engines like Docker Swarm, Apache Mesos and Kubernetes, is now also a Kubernetes certified installer, meaning that users can be confident that OpenStack and Kubernetes work together just like a user would expect.

Another trend that’s becoming quite apparent is that many enterprises that build their own private clouds do so because they have very specific hardware needs. Often, that includes GPUs and FPGAs, for example, for machine learning workloads. To make it easier for these businesses to use OpenStack, the project now includes a lifecycle management service for these kinds of accelerators.

“Specialized hardware is getting a lot of traction right now,” OpenStack CTO Mark Collier noted. “And what’s interesting is that FPGAs have been around for a long time but people are finding out that they are really useful for certain types of AI, because they’re really good at doing the relatively simple math that you need to repeat over and over again millions of times. It’s kind of interesting to see this kind of resurgence of certain types of hardware that maybe was seen as going to be disrupted by cloud and now it’s making a roaring comeback.”

With this update, the OpenStack project is also enabling easier upgrades, something that was long a daunting process for enterprises. Because it was so hard, many chose to simply not update to the latest releases and often stayed a few releases behind. Now, the so-called Fast Forward Upgrade feature allows these users to get on new releases faster, even if they are well behind the project’s own cycle. Oath, which owns TechCrunch, runs a massive OpenStack cloud, for example, and the team recently upgraded a 20,000-core deployment from Juno (the 10th OpenStack release) to Ocata (the 15th release).

The fact that Vexxhost, a Canadian cloud provider, is already offering support for the Rocky release in its new Silicon Valley cloud today is yet another sign that updates are getting a bit easier (and the whole public cloud side of OpenStack, too, often gets overlooked, but continues to grow).


Source: TechCrunch


Boston-area startups are on pace to overtake NYC venture totals

Posted on Aug 4, 2018 in Amazon, Atlas Venture, boston, Carbon Black, cargurus, Column, CRV, Demandware, enterprise software, HubSpot, Kensho, New York City, openview, PillPack, Rubius Therapeutics, San Francisco, Silicon Valley, Startups, Venture Capital, Wayfair | 0 comments

Boston has regained its longstanding place as the second-largest U.S. startup funding hub.

After years of trailing New York City in total annual venture investment, Massachusetts is taking the lead in 2018. Venture investment in the Boston metro area hit $5.2 billion so far this year, on track to be the highest annual total in years.

The Massachusetts numbers year-to-date are about 15 percent higher than the New York City total. That puts Boston’s biotech-heavy venture haul apparently second only to Silicon Valley among domestic locales thus far this year. And for New England VCs, the latest numbers also confirm already well-ingrained opinions about the superior talents of local entrepreneurs.

“Boston often gets dismissed as a has-been startup city. But the successes are often overlooked and don’t get the same attention as less successful, but more hypey companies in San Francisco,” Blake Bartlett, a partner at Boston-based venture firm OpenView, told Crunchbase News. He points to local success stories like online prescription service PillPack, which Amazon just snapped up for $1 billion, and online auto marketplace CarGurus, which went public in October and is now valued around $4.7 billion.

Meanwhile, fresh capital is piling up in the coffers of local startups with all the intensity of a New England snowstorm. In the chart below, we look at funding totals since 2012, along with reported round counts.

In the interest of rivalry, we are also showing how the Massachusetts startup ecosystem compares to New York over the past five years.

Who’s getting funded?

So what’s the reason for Boston’s 2018 successes? It’s impossible to pinpoint a single cause. The New England city’s startup scene is broad and has deep pockets of expertise in biotech, enterprise software, AI, consumer apps and other areas.

Still, we’d be remiss not to give biotech the lion’s share of the credit. So far this year, biotech and healthcare have led the New England dealmaking surge, accounting for the majority of invested capital. Once again, local investors are not surprised.

“Boston has been the center of the biotech universe forever,” said Dylan Morris, a partner at Boston and Silicon Valley-based VC firm CRV. That makes the city well-poised to be a leading hub in the sector’s latest funding and exit boom, which is capitalizing on a long-term shift toward more computational approaches to diagnosing and curing disease.

Moreover, it goes without saying that the home city of MIT has a particularly strong reputation for so-called deep tech — using really complicated technology to solve really hard problems. That’s reflected in the big funding rounds.

For instance, the largest Boston-based funding recipient of 2018, Moderna Therapeutics, is a developer of mRNA-based drugs that raised $625 million across two late-stage rounds. Besides Moderna, other big rounds for companies with a deep tech bent went to TCR2, which is focused on engineering T cells for cancer therapy, and Starry (based in both Boston and New York), which is deploying the world’s first millimeter wave band active phased array technology for consumer broadband.

Other sectors saw some jumbo-sized rounds too, including enterprise software, 3D printing and even apparel.

Boston also benefits from the rise of supergiant funding rounds. A plethora of rounds raised at $100 million or more fueled the city’s rise in the venture funding rankings. So far this year, at least 15 Massachusetts companies have raised rounds of that magnitude or more, compared to 12 in all of 2017.

Exits are happening, too

Boston companies are going public and getting acquired at a brisk pace too this year, and often for big sums.

At least seven metro-area startups have sold for $100 million or more in disclosed-price acquisitions this year, according to Crunchbase data. In the lead is online prescription drug service PillPack. The second-biggest deal was Kensho, a provider of analytics for big financial institutions that sold to S&P Global for $550 million.

IPOs are huge, too. A total of 17 Boston-area venture-backed companies have gone public so far this year, of which 15 are life science startups. The largest offering was for Rubius Therapeutics, a developer of red cell therapeutics, followed by cybersecurity provider Carbon Black.

Meanwhile, many local companies that went public in the past few years have since seen their values skyrocket. Bartlett points to examples including online retailer Wayfair (market cap of $10 billion), marketing platform HubSpot (market cap $4.8 billion) and enterprise software provider Demandware (sold to Salesforce for $2.8 billion).

New England heats up

Recollections of a frigid April sojourn in Massachusetts are too fresh for me to comfortably utter the phrase “Boston is hot.” However, speaking purely about startup funding, and putting weather aside, the Boston scene does appear to be seeing some real escalation in temperature.

Of course, it’s not just Boston. Supergiant venture funds are surging all over the place this year. Morris is even bullish on the arch-rival a few hours south: “New York and Boston love to hate each other. But New York’s doing some amazing things too,” he said, pointing to efforts to invigorate the biotech startup ecosystem.

Still, so far, it seems safe to say 2018 is shaping up as Boston’s year for startups.


Source: TechCrunch


After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

Posted on Jun 17, 2018 in Adobe, Amazon, Amazon Web Services, Atlassian, AWS, bigid, CIO, cloud applications, cloud computing, cloud-native computing, Column, computing, CRM, digitalocean, Dropbox, Edward Snowden, enterprise software, European Union, Facebook, Getty-Images, github enterprise, Google, hipchat, Infrastructure as a Service, iPhone, Marc Benioff, Microsoft, open source software, oracle, oracle corporation, Packet, RAM, SaaS, Salesforce, salesforce.com, slack, software as a service, software vendors, TC, United States, web services | 6 comments

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect back on the change that Marc Benioff created with the software-as-a-service (SaaS) model for enterprise software with his launch of Salesforce.com.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is evident even in the most recent high-profile “SaaS” acquisition, Microsoft’s purchase of GitHub, with over 50 percent of GitHub’s revenue coming from sales of its on-prem offering, GitHub Enterprise.

Data privacy and security are also becoming major issues, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?

Source: Getty Images/KTSDESIGN/SCIENCE PHOTO LIBRARY

The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent per year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and protracted.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
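The multi-tenancy economics in point 3 can be made concrete with a quick back-of-the-envelope calculation using the 1999 hardware figure cited above. This is only a sketch: the number of tenants sharing the hardware is an illustrative assumption, not a figure from the book.

```python
# Back-of-the-envelope: single-tenant vs. multi-tenant economics, using the
# cited 1999 figure of $385,000 of hardware serving ~200 Siebel end-users.
HARDWARE_COST = 385_000        # upfront hardware cost per deployment (cited above)
USERS_PER_DEPLOYMENT = 200     # end-users served by that hardware (cited above)

# Single-tenant (client-server): every customer buys the full stack.
per_user_single_tenant = HARDWARE_COST / USERS_PER_DEPLOYMENT

# Multi-tenant (hypothetical): the vendor buys comparable hardware once and
# shares it across, say, 50 customers of similar size.
TENANTS = 50  # illustrative assumption
per_user_multi_tenant = HARDWARE_COST / (USERS_PER_DEPLOYMENT * TENANTS)

print(f"single-tenant: ${per_user_single_tenant:,.2f} per user")  # $1,925.00
print(f"multi-tenant:  ${per_user_multi_tenant:,.2f} per user")   # $38.50
```

Even with this crude model, the hardware cost per seat drops by the tenant count, which is the core of Benioff’s multi-tenancy argument.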

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late ‘00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.

Source: Getty Images

It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Stalwart products like Microsoft Office and the Adobe Suite successfully made the switch from the upfront model to thriving subscription businesses years ago. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or through thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The costs of compute and storage have been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes.
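As a sketch of how lightweight that install path has become, a minimal Kubernetes Deployment manifest like the following (the application name and container image are hypothetical) can put a packaged service live on any conforming cluster with a single `kubectl apply -f deployment.yaml`:

```yaml
# Minimal Kubernetes Deployment sketch; names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crm-app                # hypothetical enterprise application
spec:
  replicas: 3                  # run three instances for resilience
  selector:
    matchLabels:
      app: crm-app
  template:
    metadata:
      labels:
        app: crm-app
    spec:
      containers:
      - name: crm-app
        image: example.com/crm-app:1.0   # hypothetical container image
        ports:
        - containerPort: 8080
```

The same manifest works unchanged whether the cluster runs in a public cloud, a VPC or a private data center, which is exactly the portability point made above.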

Source: Getty Images/ERHUI1979

What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than was previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private cloud-native instance first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning over to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.
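The attack-surface arithmetic in point 3 above is easy to reproduce. The per-employee compromise rate below is a purely illustrative assumption, not a figure from the article:

```python
# Attack-surface arithmetic from point 3 above.
SAAS_VENDORS = 1_000         # SaaS apps used by a large enterprise (cited above)
EMPLOYEES_PER_VENDOR = 250   # average vendor headcount (cited above)

entry_points = SAAS_VENDORS * EMPLOYEES_PER_VENDOR
print(entry_points)  # 250000

# Illustrative only: if each vendor employee had a 0.1% chance per year of a
# successful phishing compromise, the expected number of compromised
# employees across that footprint would be:
P_COMPROMISE = 0.001  # hypothetical annual rate
expected = entry_points * P_COMPROMISE
print(expected)  # ~250
```

Under even that conservative hypothetical rate, hundreds of third-party compromises per year would touch the enterprise’s data, which is why the SaaS footprint itself becomes a risk line-item.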

Source: Getty Images/MIKIEKWOODS

The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.


Source: TechCrunch
