
The blog of DataDiggers


EC-exclusive interview with Tim Cook, Slacklash, and tech inclusion

Posted by on May 11, 2019 in Amazon Web Services, app developers, Chanda Prescod-Weinstein, Deezer, Geoff Cook, Google, Groupon, IBM, Kate Clark, kidbox, Matthew Panzarino, Microsoft, om malik, San Francisco, The Extra Crunch Daily, Tim Cook, Travis Kalanick, True Ventures, Uber, WeWork | 0 comments

An EC-exclusive interview with Apple CEO Tim Cook

TechCrunch editor-in-chief Matthew Panzarino traveled to Florida this week to talk with Tim Cook about Apple’s developer education initiatives and also meet with high school developer Liam Rosenfeld of Lyman High School. Apple wants to attract the next set of app developers like Liam into the Xcode world, and the company is building a more ambitious strategy to do so going forward:

But that conversation with Liam does bring up some questions, and I ask Cook whether he thinks that there are more viable pathways to coding, especially for people with non-standard education or backgrounds.

“I don’t think a four year degree is necessary to be proficient at coding,” says Cook. “I think that’s an old, traditional view. What we found out is that if we can get coding in in the early grades and have a progression of difficulty over the tenure of somebody’s high school years, by the time you graduate kids like Liam, as an example of this, they’re already writing apps that could be put on the App Store.”

Against the Slacklash

TechCrunch columnist Jon Evans often writes on developer tools and productivity (see, for example, his Extra Crunch overview of the headless CMS space). Now, he sets his sights on Slack, and finds the product … much better and more productive than many would have you believe, and offers tips for maximizing its value:


Source: The Tech Crunch


Vizion.ai launches its managed Elasticsearch service

Posted by on Mar 28, 2019 in Amazon Web Services, api, Artificial Intelligence, Caching, cloud computing, computing, Developer, Elastic, Elasticsearch, Enterprise, ML, TC, world wide web | 0 comments

Setting up Elasticsearch, the open-source system that many companies large and small use to power their distributed search and analytics engines, isn’t the hardest thing. What is very hard, though, is to provision the right amount of resources to run the service, especially when your users’ demand comes in spikes, without overpaying for unused capacity. Vizion.ai’s new Elasticsearch Service does away with all of this by essentially offering Elasticsearch as a service and only charging its customers for the infrastructure they use.

Vizion.ai’s service automatically scales up and down as needed. It’s a managed service and delivered as a SaaS platform that can support deployments on both private and public clouds, with full API compatibility with the standard Elastic stack that typically includes tools like Kibana for visualizing data, Beats for sending data to the service and Logstash for transforming the incoming data and setting up data pipelines. Users can easily create several stacks for testing and development, too, for example.
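Because the service advertises full API compatibility with the standard Elastic stack, a vanilla Elasticsearch client should be able to talk to a Vizion.ai-hosted cluster the same way it talks to a self-managed one. Here is a minimal, illustrative sketch assuming the official elasticsearch Python client (8.x); the endpoint URL and API key are placeholders, not real Vizion.ai values:

    from elasticsearch import Elasticsearch

    # Endpoint and credentials are placeholders for whatever the hosted
    # service hands you; the calls themselves are the standard
    # Elasticsearch API (index a document, then run a match query).
    es = Elasticsearch(
        "https://my-stack.example-vizion-endpoint.com:9243",  # hypothetical URL
        api_key="MY_API_KEY",
    )

    es.index(index="app-logs", document={"service": "checkout", "latency_ms": 87})
    es.indices.refresh(index="app-logs")

    hits = es.search(index="app-logs", query={"match": {"service": "checkout"}})
    print(hits["hits"]["total"])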

Vizion.ai GM and VP Geoff Tudor

“When you go into the AWS Elasticsearch service, you’re going to be looking at dozens or hundreds of permutations for trying to build your own cluster,” Vizion.ai’s VP and GM Geoff Tudor told me. “Which instance size? How many instances? Do I want geographical redundancy? What’s my networking? What’s my security? And if you choose wrong, then that’s going to impact the overall performance. […] We do balancing dynamically behind that infrastructure layer.” To do this, the service looks at the utilization patterns of a given user and then allocates resources to optimize for the specific use case.

What Vizion.ai has done here is take some of the work from its parent company Panzura, a multi-cloud storage service for enterprises that holds plenty of patents around data caching, and apply it to this new Elasticsearch service.

There are obviously other companies that offer commercial Elasticsearch platforms already. Tudor acknowledges this, but argues that his company’s platform is different. With other products, he argues, you have to decide on the size of your block storage for your metadata upfront, for example, and you typically want SSDs for better performance, which can quickly get expensive. Thanks to Panzura’s IP, Vizion.ai is able to bring down the cost by caching recent data on SSDs and keeping the rest in cheaper object storage pools.

He also noted that the company is positioning the overall Vizion.ai service, with the Elasticsearch service as one of the earliest components, as a platform for running AI and ML workloads. Support for TensorFlow, PredictionIO (which plays nicely with Elasticsearch) and other tools is also in the works. “We want to make this an easy serverless ML/AI consumption in a multi-cloud fashion, where not only can you leverage the compute, but you can also have your storage of record at a very cost-effective price point.”


Source: The Tech Crunch


With $90 million in funding, the Ginkgo spinoff Motif joins the fight for the future of food

Posted by on Feb 26, 2019 in Amazon Web Services, bEYOND meat, Bill Gates, biotechnology, Breakthrough Energy Ventures, Chief Operating Officer, Co-founder, Food, food and drink, Ginkgo Bioworks, head, Impossible foods, jack ma, Jason Kelly, jeff bezos, John Doerr, manufacturing, Marc Benioff, Masayoshi Son, meat, meat substitutes, meg whitman, michael bloomberg, monsanto, partner, protein, Reid Hoffman, richard branson, TC, Tyson Foods, Vinod Khosla, web services | 0 comments

Continuing its quest to become the Amazon Web Services for biomanufacturing, Ginkgo Bioworks has launched a new spinoff called Motif Ingredients with $90 million in funding to develop proteins that can serve as meat and dairy replacements.

It’s the second spinout for Ginkgo since late 2017 when the company partnered with Bayer to launch Joyn Bio, a startup researching and developing bacteria that could improve crop yields.

Now, with Motif Ingredients, Ginkgo is tackling the wild world of protein replacements for the food and beverage industry.

It’s a move that’s likely going to send shockwaves through several of the alternative meat and dairy companies that were using Ginkgo as their manufacturing partner in their quest to reduce the demand for animal husbandry — a leading contributor to global warming — through the development of protein replacements.

“To help feed the world and meet consumers’ evolving food preferences, traditional and complementary nutritional sources need to co-exist. As a global dairy nutrition company, we see plant- and fermentation-produced nutrition as complementary to animal protein, and in particular cows’ milk,” said Judith Swales, Chief Operating Officer for the Global Consumer and Foodservice Business at Fonterra, an investor in Ginkgo’s new spinout.

To ensure the success of its new endeavor, Ginkgo has raised $90 million in financing from industry insiders like Fonterra and the global food processing and trading firm Louis Dreyfus Co., while also tapping the pool of deep-pocketed investors behind Breakthrough Energy Ventures, the climate-focused investment fund financed by a global gaggle of billionaires including Marc Benioff, Jeff Bezos, Michael Bloomberg, Richard Branson, Bill Gates, Reid Hoffman, John Doerr, Vinod Khosla, Jack Ma, Neil Shen, Masayoshi Son, and Meg Whitman.

Leading Ginkgo’s latest spinout is a longtime veteran of the food and beverage industry, Jonathan McIntyre, the former head of research and development at another biotechnology startup focused on agriculture — Indigo Ag.

McIntyre, who left Indigo just two years after being named the company’s head of research and development, previously had stints at Monsanto, Nutrasweet, and PepsiCo (in both its beverage and snack divisions).

“There’s an opportunity to produce proteins,” says McIntyre. “Right now as population grows the protein supply is going to be challenged. Motif gives the ability to create proteins and make products from low cost available genetic material.”


Ginkgo, which will have a minority stake in the new company, will provide engineering and design work to Motif, along with some initial research and development work on roughly six to nine product lines.

That push, together with the financing and Ginkgo’s backing as the manufacturer of Motif’s new proteins, should put the company in a comfortable position to achieve McIntyre’s goal of bringing its first products to market within the next two years. All Motif has to pay is cost plus a slight overhead for the Ginkgo ingredients.

“We started putting Motif together around February or March of 2018,” says Ginkgo co-founder Jason Kelly of the company’s plans. “The germination of the business had its inception earlier though, from interacting with companies in the food and beverage scene. When we talked to these companies the strong sense we got was if there had been a trusted provider of outsourced protein development they would have loved to work with us.”

The demand from consumers for alternative sources of protein and dairy — ones with the same flavor profiles as traditional dairy and meats — has reached an inflection point over the past few years. Certainly, venture capital interest in the industry has soared, along with the appetite of traditional protein purveyors like Danone, Tyson Foods, and others to take a bite out of the market.

Some industry insiders think it was Danone’s 2016 acquisition of WhiteWave in a $12.5 billion deal that was the signal which brought venture investors and food giants alike flocking to startups that were developing meat and dairy substitutes. The success of companies like Beyond Meat and Impossible Foods has only served to prove that a growing market exists for these substitutes.

At the same time, solving the problem of protein for a growing global population is critical if the world is going to reverse course on climate change. Agriculture and animal husbandry are huge contributors to the climate crisis and ones for which no solution has made it to market.

Investors think cultured proteins — fermented in tanks like brewing beer — could be an answer.


“Innovative or disruptive solutions are key to responding to changing consumer demand and to addressing the challenge of feeding a growing world population sustainably,” said Kristen Eshak Weldon, Head of Food Innovation & Downstream Strategy at Louis Dreyfus Company (LDC), a leading merchant and processor of agricultural goods. “In this sense, we are excited to partner with Motif, convinced that its next-generation ingredients will play a vital role.”

Breakthrough Energy Ventures certainly thinks so.

The investment firm has been busy placing bets across a number of different biologically based solutions to reduce the emissions associated with agriculture and cultivation. Pivot Bio is a startup competing with Ginkgo’s own Joyn Bio to create nitrogen fixing techniques for agriculture. And earlier this month, the firm invested as part of a $33 million round for Sustainable Bioproducts, which is using a proprietary bacteria found in a remote corner of Yellowstone National Park to make its own protein substitute.

For all of these companies, the goal is nothing less than providing a commercially viable technology to combat some of the causes of climate change in a way that’s appealing to the average consumer.

“Sustainability and accessible nutrition are among the biggest challenges facing the food industry today. Consumers are demanding mindful food options, but there’s a reigning myth that healthy and plant-based foods must come at a higher price, or cannot taste or function like the animal-based foods they aim to replicate,” said McIntyre, in a statement. “Biotechnology and fermentation is our answer, and Motif will be key to propelling the next food revolution with affordable, sustainable and accessible ingredients that meet the standards of chefs, food developers, and visionary brands.”


Source: The Tech Crunch


Ousted Flipkart founder Binny Bansal aims to help 10,000 Indian founders with new venture

Posted by on Feb 5, 2019 in Amazon Web Services, Asia, binny Bansal, ceo, Co-founder, Companies, computing, E-Commerce, executive, Flipkart, India, online payments, Sachin Bansal, Startup company, United States, Walmart, web services | 0 comments

Flipkart co-founder Binny Bansal’s next act is aimed at helping the next generation of startup founders in India.

Bansal has already etched his name into India’s startup history after U.S. retail giant Walmart paid $16 billion for a majority stake in the e-commerce company to expand its rivalry with Amazon. Things turned sour, however, when he resigned months after the deal’s completion due to an investigation into “serious personal misconduct.”

In 2019, 37-year-old Bansal is focused on his newest endeavor, xto10x Technologies, a startup consultancy that he founded with former colleague Saikiran Krishnamurthy. The goal is to help startup founders on a larger scale than the executive could ever do on his own.

“Person to person, I can help 10 startups but the ambition is to help 10,000 early and mid-stage entrepreneurs, not 10,” Bansal told Bloomberg in an interview.

Bansal, who started Flipkart in 2007 with Sachin Bansal (no relation) and still retains a four percent share, told Bloomberg that India-based founders are bereft of quality consultancy and software services to handle growth and company building.

“Today, software is built for large enterprises and not small startups,” he told the publication. “Think of it as solving for startups what Amazon Web Services has done for computing, helping enterprises go from zero to a thousand servers overnight with no hassle.”

“Instead of making a thousand mistakes, if we can help other startups make a hundred or even few hundred, that would be worth it,” Bansal added.

Bansal served as Flipkart’s CEO from 2007 to 2016 before becoming CEO of the Flipkart Group. He declined to go into specifics of the complaint against him at Flipkart — which reports suggest came about from a consensual relationship with a female employee — and, of the breakdown of his relationship with Sachin Bansal, he said he’s moved on to new things.

It isn’t just xto10x Technologies that is keeping him busy. Bansal is involved in investment firm 021 Capital, where he is the lead backer following a $50 million injection. Neither role involves day-to-day operations, Bloomberg reported, but Bansal is still putting his money and experience to work shaping the Indian startup ecosystem.


Source: The Tech Crunch


AWS launches Arm-based servers for EC2

Posted by on Nov 27, 2018 in Amazon Web Services, amd, ARM, AWS, AWS re:Invent 2018, Cloud, cloud computing, Developer, linux, operating system, operating systems, TC, Ubuntu, web servers | 0 comments

At its re:Invent conference in Las Vegas, AWS today announced the launch of Arm-based servers for its EC2 cloud computing service. These aren’t run-of-the-mill Arm chips, though. AWS took the standard Arm cores and then customized them to fit its needs. The company says that its so-called AWS Graviton Processors have been optimized for performance and cost, with a focus on scale-out workloads that can be spread across a number of smaller instances (think containerized microservices, web servers, caching fleets, etc.).

The first set of instances, called A1, is now available in a number of AWS regions in the U.S. and Europe. They support all of AWS’s standard instance pricing models, including on-demand, reserved instance, spot instance, dedicated instance and dedicated host.

For now, you can only use Amazon Linux 2, RHEL and Ubuntu as operating systems for these machines, but AWS promises that additional operating system support will launch in the future.
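For readers who want to try the new instance type, an A1 machine launches through the same EC2 APIs as any other instance. A minimal sketch using boto3; the AMI ID and key pair are placeholders, and you would pick an arm64 image (Amazon Linux 2, RHEL or Ubuntu) for your region:

    import boto3

    # Standard RunInstances call; only the instance type and an arm64 AMI
    # distinguish this from launching an x86 machine.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: an arm64 Amazon Linux 2 AMI
        InstanceType="a1.medium",         # 1 CPU, 2 GiB RAM, Graviton-based
        KeyName="my-keypair",             # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])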

Because these are ARM servers, you’ll obviously have to recompile any native code for them before you can run your applications on them. Virtually any application that is written in a scripting language, though, will probably run without any modifications.

Prices for these instances start at $0.0255/hour for an a1.medium machine with 1 CPU and 2 GiB of RAM and go up to $0.4080/hour for machines with 16 CPUs and 32 GiB of RAM. That’s maybe not as cheap as you would’ve expected given that an X86-based t3.nano server starts at $0.0052/hour, but you can always save quite a bit by using spot instances, of course. Until we see some benchmarks, though, it’s hard to compare these different machine types anyway.

As Amazon’s Jeff Barr notes in today’s announcement, the company’s move to its so-called Nitro System now allows it to launch new instance types at a faster clip. Nitro essentially provides the building blocks for creating new instance types that the team can then mix and match as needed.

It’s worth noting that AWS also launched support for AMD EPYC processors earlier this month.



Source: The Tech Crunch


AWS Transit Gateway helps customers understand their entire network

Posted by on Nov 27, 2018 in Amazon Web Services, AWS re:Invent 2018, Cloud, Enterprise, Networking, TC | 0 comments

Tonight at AWS re:Invent, the company announced a new tool called AWS Transit Gateway, designed to help you build a network topology inside of AWS that lets you share resources across accounts and bring together on-premises and cloud resources in a single network.

Amazon already has a popular product called Amazon Virtual Private Cloud (VPC), which helps customers build private instances of their applications. The Transit Gateway is designed to help build connections between VPCs, which up until now has been tricky to do.

As Peter DeSantis, VP of global infrastructure and customer support at AWS, explained at an event Monday night at re:Invent, AWS Transit Gateway gives you a single set of controls that lets you connect to a centrally managed gateway to grow your network easily and quickly.

Diagram: AWS

DeSantis said that this tool also gives you the ability to traverse your AWS and on-premises networks. “A gateway is another way that we’re innovating to enable customers to have secure, easy-to-manage networking across both on premise and their AWS cloud environment,” he explained.

AWS Transit Gateway lets you build connections across a network wherever the resources live in a standard kind of network topology. “Today we are giving you the ability to use the new AWS Transit Gateway to build a hub-and-spoke network topology. You can connect your existing VPCs, data centers, remote offices, and remote gateways to a managed Transit Gateway, with full control over network routing and security, even if your VPCs, Active Directories, shared services, and other resources span multiple AWS accounts,” Amazon’s Jeff Barr wrote in a blog post announcing the new feature.
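Programmatically, the Transit Gateway lives in the regular EC2 API. The following is a minimal boto3 sketch that creates a gateway and attaches an existing VPC to it; the VPC and subnet IDs are placeholders, and a real deployment would also configure route tables on both sides:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create the hub: a Transit Gateway with an Amazon-side ASN.
    tgw = ec2.create_transit_gateway(
        Description="hub for shared-services network",
        Options={"AmazonSideAsn": 64512},
    )
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach one spoke VPC (IDs are placeholders).
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],
    )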

For much of its existence, AWS was about getting you to the cloud and managing your cloud resources. This makes sense for a pure cloud company like AWS, but customers tend to have complex configurations with some infrastructure and software still living on premises and some in the cloud. This could help bridge the two worlds.



Source: The Tech Crunch


AWS Global Accelerator helps customers manage traffic across zones

Posted by on Nov 27, 2018 in Amazon Web Services, AWS re:Invent 2018, Cloud, edge computing, Enterprise, Networking, TC | 0 comments

Many AWS customers have to run in multiple zones for reasons including performance requirements, regulatory issues or failover management. Whatever the reason, AWS announced a new tool tonight called Global Accelerator, designed to help customers route traffic more easily across multiple regions.

Peter DeSantis, VP of global infrastructure and customer support at AWS, explained at an event Monday night at re:Invent that much of AWS customers’ traffic already flows over the company’s massive network, and that customers are using AWS Direct Connect to give applications consistent performance and low network variability as traffic moves between AWS regions. He said what has been missing is a way to use the AWS global network to optimize their applications.

“Tonight I’m excited to announce AWS Global Accelerator. AWS Global Accelerator makes it easy for you to improve the performance and availability of your applications by taking advantage of the AWS global network,” he told the AWS re:Invent audience.

Graphic: AWS

“Your customer traffic is routed from your end users to the closest AWS edge location and from there traverses the congestion-free, redundant, highly available AWS global network. In addition to improving performance, AWS Global Accelerator has built-in fault isolation, which instantly reacts to changes in the network health or your application’s configuration,” DeSantis explained.

In fact, network administrators can route traffic based on defined policies such as health or geographic requirements and the traffic will move to the designated zone automatically based on those policies.

AWS plans to charge customers based on the number of accelerators they create. “An accelerator is the resource you create to direct traffic to optimal endpoints over the AWS global network. Customers will typically set up one accelerator for each application, but more complex applications may require more than one accelerator,” AWS’s Shaun Ray wrote in a blog post announcing the new feature.
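In API terms, the accelerator, its listener and its endpoint group are created as separate resources. Here is a rough boto3 sketch (the Global Accelerator API is typically called through the us-west-2 endpoint regardless of where your endpoints run); the load balancer ARN is a placeholder:

    import boto3

    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    # One accelerator per application, as described above.
    acc = ga.create_accelerator(Name="my-app", IpAddressType="IPV4", Enabled=True)
    acc_arn = acc["Accelerator"]["AcceleratorArn"]

    # Listen for TCP 443 on the accelerator's static anycast IPs.
    listener = ga.create_listener(
        AcceleratorArn=acc_arn,
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
    )

    # Point the listener at an existing load balancer in us-east-1 (ARN is a placeholder).
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[{"EndpointId": "arn:aws:elasticloadbalancing:...", "Weight": 128}],
    )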

AWS Global Accelerator is available today in several regions in the US, Europe and Asia.



Source: The Tech Crunch


VMware pulls AWS’s Relational Database Service into the data center

Posted by on Aug 27, 2018 in Amazon Web Services, Andy Jassy, ceo, cloud computing, computing, Microsoft, mysql, oracle, postgresql, relational database, TC, vmware | 0 comments

Here’s some unusual news: AWS, Amazon’s cloud computing arm, today announced that it plans to bring its Relational Database Service (RDS) to VMware, no matter whether that’s VMware Cloud on AWS or a privately hosted VMware deployment in a corporate data center.

While some of AWS’s competitors have long focused on these kinds of hybrid cloud deployments, AWS never really put the same kind of emphasis on this. Clearly, though, that’s starting to change — maybe in part because Microsoft and others are doing quite well in this space.

“Managing the administrative and operational muck of databases is hard work, error-prone and resource intensive,” said AWS CEO Andy Jassy. “It’s why hundreds of thousands of customers trust Amazon RDS to manage their databases at scale. We’re excited to bring this same operationally battle-tested service to VMware customers’ on-premises and hybrid environments, which will not only make database management much easier for enterprises, but also make it simpler for these databases to transition to the cloud.”

With Amazon RDS on VMware, enterprises will be able to use AWS’s technology to run and manage Microsoft SQL Server, Oracle, PostgreSQL, MySQL and MariaDB databases in their own data centers. The idea here, AWS says, is to make it easy for enterprises to set up and manage their databases wherever they want to host their data — and to then migrate it to AWS when they choose to do so.

This new service will soon be in private preview, so we don’t know all that much about how this will work in practice or what it will cost. AWS promises, however, that the experience will pretty much be the same as in the cloud and that RDS on VMware will handle all the updates and patches automatically.
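The announcement doesn’t spell out what the on-premises control plane will look like, but since AWS says the experience should match the cloud, today’s RDS provisioning call is a reasonable mental model. A minimal boto3 sketch for a cloud-hosted PostgreSQL instance; the identifiers and credentials are placeholders:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Provision a small managed PostgreSQL instance; RDS handles backups,
    # patching and failover, which is the part AWS wants to extend to VMware.
    rds.create_db_instance(
        DBInstanceIdentifier="inventory-db",   # placeholder name
        Engine="postgres",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=100,                  # GiB
        MasterUsername="admin_user",           # placeholder credentials
        MasterUserPassword="change-me-please",
    )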

Today’s announcement comes about two years after the launch of VMware Cloud on AWS, which was pretty much the reverse of today’s news. With VMware Cloud on AWS, enterprises can take their existing VMware deployments and move them to AWS.


Source: The Tech Crunch


Crypto and venture’s biggest names are backing a new distributed ledger project called Oasis Labs

Posted by on Jul 9, 2018 in Amazon Web Services, blockchain, blockchains, California, Co-founder, coinbase, cryptocurrencies, cryptocurrency, cryptography, distributed computing, Distributed Ledger, ethereum, foundation capital, guggenheim, MIT, smart contract, software development, TC, Uber, University of California, university of california berkeley, Venture Capital, web services | 0 comments

A team of top security researchers from the University of California, Berkeley and MIT have come together to launch a new cryptographic project that combines secure software and hardware to enable privacy-preserving smart contracts under the banner of Oasis Labs.

That vision, which is being marketed as the baby of a union between Ethereum and Amazon Web Services, has managed to attract $45 million in pre-sale financing from some of the biggest names in venture capital and cryptocurrency investing.

The chief architect of the project (and chief executive of Oasis Labs) is UC Berkeley Professor Dawn Song, a security expert who first came to prominence in 2009 when she was named one of MIT Technology Review’s Innovators Under 35. Song’s rise in the security world was capped with both a MacArthur Fellowship and a Guggenheim Award for her work on security technologies. But it’s the more recent work that she’s been doing around hardware and software development in conjunction with other Berkeley researchers, like her postdoctoral associate Raymond Cheng, that grabbed investors’ attention.

Through the Keystone enclave hardware project, Song and Cheng worked with MIT researchers and professors like Srini Devadas and Ilia Lebedev on technology to secure sensitive data on the platform.

“We use a combination of trusted hardware and cryptographic techniques (such as secure multiparty computation) to enable smart contracts to compute over this encrypted data, without revealing anything about the underlying data. This is like doing computation inside a black box, which only outputs the computation result without showing what’s inside the black box,” Song wrote to me in an email. “In addition to supporting existing trusted hardware implementations, we are also working on a fully open source trusted hardware enclave implementation; a project we call Keystone. We also have years of experience building differential privacy tools, which are now being used in production at Uber for their data privacy initiatives. We plan to incorporate such techniques into our smart contract platform to further provide privacy and protect the computation output from leaking sensitive information about inputs.”

Song says that her project has solved the scaling problem by separating execution from consensus.

“For each smart contract execution, we randomly select a subset of the computation nodes to form a computation committee, using a proof of stake mechanism. The computation committee executes the smart contract transaction,” Song wrote in an email exchange with TechCrunch. “The consensus committee then verifies the correctness of the computation results from the computation committee. We use different mathematical and cryptographic methods to enable efficient verification of the correctness of the computation results. Once the verification succeeds, the state transition is committed to the distributed ledger by the consensus committee.”

Having the computation committee work in parallel, with the consensus committee only needing to verify the correctness of the computation, creates an easier path to scalability.

Other platforms have attempted to use sampling to speed up transactions over distributed systems (Hedera Hashgraph comes to mind), but have been met with limited adoption in the market.

“We use proof-of-stake mechanisms to elect instances of different types of functional committees: compute, storage and consensus committees,” Song explained. “We can scale each of the different functions independently based on workload and system needs. One of our observations of existing systems is that consensus operations are very expensive. Our network protocol design allows compute committees and storage committees to process transactions without relying on heavy-weight consensus protocols.”
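To make the split between execution and consensus concrete, here is a purely illustrative Python sketch, not the actual Oasis Labs protocol: a stake-weighted draw picks a small compute committee, each member executes the contract independently, and the consensus side only has to check that the results agree before committing to the ledger:

    import random

    def select_committee(nodes, stakes, size, seed):
        """Stake-weighted random sample, standing in for proof-of-stake election."""
        rng = random.Random(seed)
        return rng.choices(nodes, weights=stakes, k=size)

    def execute_on_committee(contract, tx, committee):
        """Every compute-committee member runs the smart contract independently."""
        return [contract(tx) for _ in committee]

    def verify_and_commit(results, ledger):
        """The consensus committee only verifies agreement, then commits state."""
        if len(set(results)) == 1:
            ledger.append(results[0])
            return True
        return False

    nodes = ["n1", "n2", "n3", "n4", "n5"]
    stakes = [10, 5, 20, 1, 8]
    committee = select_committee(nodes, stakes, size=3, seed=7)

    ledger = []
    results = execute_on_committee(lambda tx: tx["a"] + tx["b"], {"a": 2, "b": 3}, committee)
    assert verify_and_commit(results, ledger) and ledger == [5]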

Song’s approach has managed to gain the support of firms including a16zcrypto, Accel, Binance, DCVC (Data Collective), Electric Capital, Foundation Capital, Metastable, Pantera, Polychain and more.

In all, some 75 investors have rallied to finance the company’s approach to securing data and selling compute power on a cryptographically secured ledger.

“It’s exciting to see talented people like Dawn and her team working on ways to transition the internet away from data silos and towards a world with more responsible ways to share and own your data,” said Fred Ehrsam, co-founder of Coinbase and Oasis Labs investor, in a statement.

“The next step is getting our product in the hands of developers who align with our mission and can help inform the evolution of the platform as they build applications upon it,” said Oasis Labs co-founder and CTO Raymond Cheng in a statement.

For potential customers who’d eventually use the smart contracts developed on Oasis’ platform, the system would work much like the method established by Ethereum.

“The token usage model in Oasis is very similar to Ethereum, where users pay gas fee to miners for executing smart contracts,” Song wrote. “One just needs one token to pay for gas fee for executing smart contracts. As with Ethereum, in our platform storage and compute have different pricing models but they both are paid with the same token.”

And Oasis’ leadership is looking ahead to a marketplace that incentivizes scale and makes fees accessible. “If the token price goes up, the amount of tokens needed to pay for operations can decrease (this is similar to Ethereum’s gas price, which is independent from the price of Ether). The number of tokens needed to pay for smart contract execution is not fixed.”


Source: The Tech Crunch


After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

Posted by on Jun 17, 2018 in Adobe, Amazon, Amazon Web Services, Atlassian, AWS, bigid, CIO, cloud applications, cloud computing, cloud-native computing, Column, computing, CRM, digitalocean, Dropbox, Edward Snowden, enterprise software, European Union, Facebook, Getty-Images, github enterprise, Google, hipchat, Infrastructure as a Service, iPhone, Marc Benioff, Microsoft, open source software, oracle, oracle corporation, Packet, RAM, SaaS, Salesforce, salesforce.com, slack, software as a service, software vendors, TC, United States, web services | 6 comments

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect back on the change that Marc Benioff created with the software-as-a-service (SaaS) model for enterprise software with his launch of Salesforce.com.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition of GitHub by Microsoft, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security is also becoming a major issue, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?


The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent each year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and elongated.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-late ’00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.


It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Years ago already, stalwart products like Microsoft Office and the Adobe Suite  successfully made the switch from the upfront model to thriving subscription businesses. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The cost of compute and storage have been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packaged as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes (a minimal sketch follows this list).
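As an example of how little ceremony that deployment now involves, here is a hedged sketch using the Kubernetes Python client to create a two-replica Deployment of a containerized web service; the image, labels and namespace are placeholders, and it assumes a working kubeconfig:

    from kubernetes import client, config

    # Assumes a reachable cluster and a local kubeconfig (e.g., from minikube
    # or a cloud provider); everything named here is a placeholder.
    config.load_kube_config()
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="example-web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "example-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "example-web"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="web",
                            image="nginx:1.25",
                            ports=[client.V1ContainerPort(container_port=80)],
                        )
                    ]
                ),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)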


What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and of cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC much more easily than previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private, cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.


The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.


Source: The Tech Crunch
