
The blog of DataDiggers


Huawei: “The US security accusation of our 5G has no evidence. Nothing.”

Posted by on Feb 26, 2019 in 4G, 5g, 5g security, Asia, cellular networks, China, Edward Snowden, Europe, European Commission, European Union, geopolitics, Huawei, Mariya Gabriel, Mobile, mwc 2019, Network Security, Security, telecommunications, trump, United States | 0 comments

Huawei’s rotating chairman Guo Ping kicked off a keynote speech this morning at the world’s biggest mobile industry tradeshow with a wry joke. “There has never been more interest in Huawei,” he told delegates at Mobile World Congress. “We must be doing something right!”

The Chinese company is seeking to dispel suspicion around the security of its 5G network equipment, suspicion that has been amplified by U.S. President Trump, who has been urging U.S. allies not to buy kit or services from Huawei. (And some, including Australia, have banned carriers from using Huawei kit.)

Last week Trump also tweet-shamed U.S. companies — saying they needed to step up their efforts to roll out 5G networks or “get left behind”.

In an MWC keynote speech yesterday the European Commission’s digital commissioner Mariya Gabriel signalled the executive is prepared to step in and regulate to ensure a “common approach” on the issue of network security — to avoid the risk of EU member states taking individual actions that could delay 5G rollouts across Europe.

Huawei appeared to welcome the prospect today.

“Government and the mobile operators should work together to agree what this assurance testing and certification rating for Europe will be,” said Guo, suggesting that’s Huawei’s hope for any Commission action on 5G security.

“Let experts decide whether networks are safe or not,” he added, implying Trump is the opposite of an expert. “Huawei has a strong track record in security for three decades. Serving three billion people around the world. The U.S. security accusation of our 5G has no evidence. Nothing.”

Geopolitical tensions over network security have become the biggest headache for Huawei, which has positioned itself as a key vendor of 5G kit just as carriers prepare to upgrade their existing cellular networks to the next-gen flavor.

Guo claimed today that Huawei is “the first company who can deploy 5G networks at scale”, giving a pitch for what he described as “powerful, simple and intelligent” next-gen network kit, while clearly enjoying the opportunity of being able to agree with U.S. president Trump in public — that “the U.S. needs powerful, faster and smarter 5G”. 🔥

But any competitive lead in next-gen network tech also puts the company in prime position for political blowback linked to espionage concerns related to the Chinese state’s access to data held or accessed by commercial companies.

Huawei’s strategy to counter this threat has been to come out fighting for its commercial business — and it had plenty more of that spirit on show this morning. As well as a bunch of in-jokes. Most notably a reference to NSA whistleblower Edward Snowden which drew a knowing ripple of laughter from the audience.

“We understand innovation is nothing without security,” said Guo, segueing from a sales pitch for Huawei’s 5G network solutions straight into the giant geopolitical security question looming over the conference.

“Prism, prism on the wall who is the most trustworthy of them all?” he said, throwing up a colorful slide to illustrate the joke. “It’s a very important question. And if you don’t ask them that you can go ask Edward Snowden.”

You can’t use “a crystal ball to manage cybersecurity”, Guo went on, dubbing it “a challenge we all share” and arguing that every player in the mobile industry has responsibility to defuse the network security issue — from kit vendors to carriers and standards bodies, as well as regulators.

“With 5G we have made a lot of progress over 4G and we can proudly say that 5G is safer than 4G. As a vendor we don’t operate carriers’ networks, and we don’t all carry data. Our responsibility — what we promise — is that we don’t do anything bad,” he said. “We don’t do bad things.”

“Let me say this as clearly as possible,” he went on, putting up another slide that literally underlined the point. “Huawei has not and will never plant backdoors. And we will never allow anyone to do so in our equipment.

“We take this responsibility very seriously.”

Guo’s pitch on network trust and security was to argue that where 5G networks are concerned security is a collective industry responsibility — which in turn means every player in the chain plays a monitoring role that allows for networks to be collectively trusted.

“Carriers are responsible for secure operations of their own networks. 5G networks are private networks. The boundary between different networks are clear. Carriers can prevent outside attacks with firewalls and security gateways. For internal threats carriers can manage, monitor and audit all vendors and partners to make sure their network elements are secure,” he said, going on to urge the industry to work together on standards which he described as “our shared responsibility”.

“To build safer networks we need to standardize cybersecurity requirements and these standards must be verifiable for all vendors and all carriers,” he said, adding that Huawei “fully supports” the work of industry standards and certification bodies the GSMA and 3GPP who he also claimed have “strong capabilities to verify 5G’s security”.

Huawei’s strategy of defusing geopolitical risk by appealing to the industry as a whole to tackle the network trust issue is a smart one, given that the uncertainty generated by Trump’s attacks is hardly welcomed by anyone in the mobile business.

Huawei’s headache might lead to the industry as a whole catching a cold — and no one at MWC wants that.

Later in the keynote Guo also pointed to the awkward “irony” of the U.S. Cloud Act — given the legislation allows U.S. entities to “access data across borders”.

U.S. overreach on accessing the personal data of foreign citizens continues to cause major legal headaches in Europe, a result of the clash between U.S. national security interests and EU citizens’ fundamental privacy rights. So Guo’s point won’t have been lost on an MWC audience packed with European delegates attending the annual tradeshow.

“So for best technology and greater security choose Huawei. Please choose Huawei!” Guo finished, ending his keynote with a line that could very well make it as an upbeat marketing slogan writ large on one of the myriad tech-packed booths here at Fira Gran Via, Barcelona.


Source: TechCrunch


Tor pulls in record donations as it lessens reliance on US government grants

Posted by on Jan 11, 2019 in android, brave, Brendan Eich, carnegie mellon, censorship, censorshit, DuckDuckGo, Edward Snowden, Federal Bureau of Investigation, firefox, Mozilla, TC, tor, U.S. government, United States | 0 comments

Tor, the open-source initiative that provides a more secure way to access the internet, is continuing to diversify its funding away from its long-standing reliance on U.S. government grants.

The Tor Foundation — the organization behind the service, whose name stands for “The Onion Router” — announced this week that it brought in a record $460,000 from individual donors in 2018. In addition, recently released financial information shows it raised a record $4.13 million from all sources in 2017, thanks to growth in non-U.S. government donors.

The individual donation push represents an increase on the $400,000 it raised in 2017. A large part of that is down to Tor ally Mozilla, which once again pledged to match donations in the closing months of the year, while an anonymous individual matched all new backers who pledged up to $20,000.

Overall, the foundation said that it attracted donations from 115 countries worldwide in 2018, which reflects its importance outside of the U.S.

The record donation haul comes weeks after the Tor Foundation quietly revealed its latest financials — for 2017 — which show it has lessened its dependence on U.S. government sources. That’s been a key goal for some time, particularly after allegations that the FBI paid Carnegie Mellon researchers to help crack Tor, which served as a major motivation for the introduction of fundraising drives in 2015.

Back in 2015, U.S. government sources accounted for 80-90 percent of its financial backing, but that fell to just over 50 percent in 2017. The addition of a Swedish government agency, which provided $600,000, helped on that front, as well as corporate donations from Mozilla ($520,000) and DuckDuckGo ($25,000), more than $400,000 from a range of private foundations, and, of course, those donations from individuals.

Tor is best known for being used by NSA whistleblower Edward Snowden but, with governments across the world cracking down on the internet, it is a resource that’s increasingly necessary if we are to guard the world’s right to a free internet.

Tor has certainly been busy making its technology more accessible over the last year.

It launched its first official mobile browser for Android in September, and the same month it released TorBrowser 8.0, its most usable browser yet, which is based on Firefox’s 2017 Quantum structure. It has also worked closely with Mozilla to bring Tor into Firefox itself as it has already done with Brave, a browser firm led by former Mozilla CEO Brendan Eich.

Beyond the browser and the Tor network itself, which is designed to minimize the potential for network surveillance, the organization also develops a range of other projects. More than two million people are estimated to use Tor, according to data from the organization.
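The onion-routing design that gives the network its name layers encryption so that each relay can peel off only its own layer, and no single relay sees both the sender and the content. A toy Python sketch of the layering idea — using a throwaway XOR cipher purely for illustration; real Tor builds circuits with proper public-key cryptography and AES inside TLS:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR data against a repeating key. NOT secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, hop_keys: list[bytes]) -> bytes:
    """Encrypt in reverse hop order, so the entry relay peels first."""
    for key in reversed(hop_keys):
        message = xor_bytes(message, key)
    return message

def route(onion: bytes, hop_keys: list[bytes]) -> bytes:
    """Each relay strips exactly one layer using its own key."""
    for key in hop_keys:
        onion = xor_bytes(onion, key)
    return onion

keys = [os.urandom(16) for _ in range(3)]  # entry, middle, exit relays
onion = wrap(b"hello over Tor", keys)
assert onion != b"hello over Tor"               # traffic leaves wrapped
assert route(onion, keys) == b"hello over Tor"  # all three layers peeled
```

The point of the layering is that the entry relay learns who you are but not what you sent, while the exit relay learns the payload but not who sent it.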


Source: TechCrunch


Khashoggi’s fate shows the flip side of the surveillance state

Posted by on Oct 20, 2018 in Edward Snowden, Government, Jamal Khashoggi, law enforcement, mass surveillance, Mohammed Bin Salman, national security, Privacy, russia, Saudi Arabia, Security, Softbank, Storage, surveillance, TC, trump, Turkey, Venture Capital, Vision Fund, Visual Computing | 0 comments

It’s been over five years since NSA whistleblower Edward Snowden lifted the lid on government mass surveillance programs, revealing, in unprecedented detail, quite how deep the rabbit hole goes thanks to the spread of commercial software and connectivity enabling a bottomless intelligence-gathering philosophy of ‘bag it all’.

Yet technology’s onward march has hardly broken its stride.

Government spying practices are perhaps more scrutinized, as a result of awkward questions about out-of-date legal oversight regimes. Though whether the resulting legislative updates, putting an official stamp of approval on bulk and/or warrantless collection as a state spying tool, have put Snowden’s ethical concerns to bed seems doubtful — albeit, it depends on who you ask.

The UK’s post-Snowden Investigatory Powers Act continues to face legal challenges. And the government has been forced by the courts to unpick some of the powers it helped itself to vis-à-vis people’s data. But bulk collection, as an official modus operandi, has been both avowed and embraced by the state.

In the US, too, lawmakers elected to push aside controversy over a legal loophole that provides intelligence agencies with a means for the warrantless surveillance of American citizens — re-stamping Section 702 of FISA for another six years. So of course they haven’t cared a fig for non-US citizens’ privacy either.

Increasingly powerful state surveillance is seemingly here to stay, with or without adequately robust oversight. And commercial use of strong encryption remains under attack from governments.

But there’s another end to the surveillance telescope. As I wrote five years ago, those who watch us can expect to be — and indeed are being — increasingly closely watched themselves as the lens gets turned on them:

“Just as our digital interactions and online behaviour can be tracked, parsed and analysed for problematic patterns, pertinent keywords and suspicious connections, so too can the behaviour of governments. Technology is a double-edged sword – which means it’s also capable of lifting the lid on the machinery of power-holding institutions like never before.”

We’re now seeing some of the impacts of this surveillance technology cutting both ways.

With attention to detail, good connections (in all senses) and the application of digital forensics all sorts of discrete data dots can be linked — enabling official narratives to be interrogated and unpicked with technology-fuelled speed.

Witness, for example, how quickly the Kremlin’s official line on the Skripal poisonings unravelled.

After the UK released CCTV of two Russian suspects of the Novichok attack in Salisbury, last month, the speedy counter-claim from Russia, presented most obviously via an ‘interview’ with the two ‘citizens’ conducted by state mouthpiece broadcaster RT, was that the men were just tourists with a special interest in the cultural heritage of the small English town.

Nothing to see here, claimed the Russian state, even though the two unlikely tourists didn’t appear to have done much actual sightseeing on their flying visit to the UK during the tail end of a British winter (unless you count vicarious viewing of Salisbury’s Wikipedia page).

But digital forensics outfit Bellingcat, partnering with investigative journalists at The Insider Russia, quickly found plenty to dig up online, and with the help of data-providing tips. (We can only speculate who those whistleblowers might be.)

Their investigation made use of a leaked database of Russian passport documents; passport scans provided by sources; publicly available online videos and selfies of the suspects; and even visual computing expertise to academically cross-match photos taken 15 years apart — to, within a few weeks, credibly unmask the ‘tourists’ as two decorated GRU agents: Anatoliy Chepiga and Dr Alexander Yevgeniyevich Mishkin.

When an official narrative that already lacks credibility is set against an external investigation able to show its workings and sources (where possible), and so demonstrate how reasonably constructed and plausible the counter-narrative is, there’s little doubt where the real authority lies.

And who the real liars are.

That the Kremlin lies is hardly news, of course. But when its lies are so painstakingly and publicly unpicked, and its veneer of untruth ripped away, there is undoubtedly reputational damage to the authority of Vladimir Putin.

The sheer depth and availability of data in the digital era supports faster-than-ever evidence-based debunking of official fictions, threatening to erode rogue regimes built on lies by pulling away the curtain that invests their leaders with power in the first place — by implying the scope and range of their capacity and competency is unknowable, and letting other players on the world stage accept such a ‘leader’ at face value.

The truth about power is often far more stupid and sordid than the fiction. So a powerful abuser, with their workings revealed, can be reduced to their baser parts — and shown for the thuggish and brutal operator they really are, as well as proved a liar.

On the stupidity front, in another recent and impressive bit of cross-referencing, Bellingcat was able to turn passport data pertaining to another four GRU agents — whose identities had been made public by Dutch and UK intelligence agencies (after they had been caught trying to hack into the network of the Organisation for the Prohibition of Chemical Weapons) — into a long list of 305 suggestively linked individuals also affiliated with the same GRU military unit, and whose personal data had been sitting in a publicly available automobile registration database… Oops.

There’s no doubt certain governments have wised up to the power of public data and are actively releasing key info into the public domain where it can be pored over by journalists and interested citizen investigators — be that CCTV imagery of suspects or actual passport scans of known agents.

A cynic might call this selective leaking. But while the choice of what to release may well be self-serving, the veracity of the data itself is far harder to dispute. Exactly because it can be cross-referenced with so many other publicly available sources and so made to speak for itself.

Right now, we’re in the midst of another fast-unfolding example of surveillance apparatus and public data standing in the way of dubious state claims — in the case of the disappearance of Washington Post journalist Jamal Khashoggi, who went into the Saudi consulate in Istanbul on October 2 for a pre-arranged appointment to collect papers for his wedding and never came out.

Saudi authorities first tried to claim Khashoggi left the consulate the same day, though did not provide any evidence to back up their claim. And CCTV clearly showed him going in.

Yesterday they finally admitted he was dead — but are now trying to claim he died quarrelling in a fistfight, attempting to spin another after-the-fact narrative to cover up and blame-shift the targeted slaying of a journalist who had written critically about the Saudi regime.

Since Khashoggi went missing, CCTV footage and publicly available data have also been pulled and compared to identify a group of Saudi men who flew into Istanbul just prior to his appointment at the consulate; were caught on camera outside it; and left Turkey immediately after he had vanished.

Including naming a leading Saudi forensics doctor, Dr Salah Muhammed al-Tubaigy, as being among the party that Turkish government sources also told journalists had been carrying a bone saw in their luggage.

Men in the group have also been linked to Saudi crown prince Mohammed bin Salman, via cross-referencing travel records and social media data.

“In a 2017 video published by the Saudi-owned Al Ekhbariya on YouTube, a man wearing a uniform name tag bearing the same name can be seen standing next to the crown prince. A user with the same name on the Saudi app Menom3ay is listed as a member of the royal guard,” writes the Guardian, joining the dots on another suspected henchman.

A marked element of the Khashoggi case has been the explicit descriptions of his fate leaked to journalists by Turkish government sources, who have said they have recordings of his interrogation, torture and killing inside the building — presumably via bugs either installed in the consulate itself or via intercepts placed on devices held by the individuals inside.

This surveillance material has reportedly been shared with US officials, where it must be shaping the geopolitical response — making it harder for President Trump to do what he really wants to do, and stick like glue to a regional US ally with which he has his own personal financial ties, because the arms of that state have been recorded in the literal act of cutting off the fingers and head of a critical journalist, and then sawing up and disposing of the rest of his body.

Attempts by the Saudis to construct a plausible narrative to explain what happened to Khashoggi when he stepped over its consulate threshold to pick up papers for his forthcoming wedding have failed in the face of all the contrary data.

Meanwhile, the search for a body goes on.

And attempts by the Saudis to shift blame for the heinous act away from the crown prince himself are also being discredited by the weight of data…

And while it remains to be seen what sanctions, if any, the Saudis will face from Trump’s conflicted administration, the crown prince is already being hit where it hurts by the global business community withdrawing in horror from the prospect of being tainted by bloody association.

The idea that a company as reputation-sensitive as Apple would be just fine investing billions more alongside the Saudi regime, in SoftBank’s massive Vision Fund vehicle, seems unlikely, to say the least.

Thanks to technology’s surveillance creep the world has been given a close-up view of how horrifyingly brutal the Saudi regime can be — and through the lens of an individual it can empathize with and understand.

Safe to say, supporting second acts for regimes that cut off fingers and sever heads isn’t something any CEO would want to become famous for.

The power of technology to erode privacy is clearer than ever. Down to the very teeth of the bone saw. But what’s also increasingly clear is that powerful and at times terrible capability can be turned around to debase power itself — when authorities themselves become abusers.

So the flip-side of the surveillance state can be seen in the public airing of the bloody colors of abusive regimes.

Turns out, microscopic details can make all the difference to geopolitics.

RIP Jamal Khashoggi


Source: TechCrunch


Privacy groups ask senators to confirm US surveillance oversight nominees

Posted by on Aug 29, 2018 in Edward Snowden, Government, mass surveillance, Privacy, Privacy and Civil Liberties Oversight Board, Security, trump | 0 comments

A coalition of privacy groups are calling on lawmakers to fill the vacant positions on the government’s surveillance oversight board, which hasn’t fully functioned in almost two years.

The Privacy and Civil Liberties Oversight Board, known as PCLOB, is a little-known but important group that helps ensure intelligence agencies and executive branch policies fall within the law. The board’s work gives its members access to classified programs run by the dozen-plus intelligence agencies so they can determine whether those programs are legal and effective, while balancing Americans’ privacy and civil liberties rights.

In its most recent unclassified major report in 2015, PCLOB called for an end of the NSA’s collection of Americans’ phone records.

But the board fell out of quorum when four members left the board last year, leaving just the chairperson. President Obama did not fill the vacancies before he left office, putting PCLOB’s work largely on ice.

A report by The Intercept said, citing obtained emails, that the board was “basically dead,” but things were looking up when President Trump earlier this year picked a bipartisan range of five nominees for the board, including a computer science and policy professor and a former senior Justice Department lawyer named in March. If confirmed by the Senate Judiciary Committee, the newly appointed members would put the board back into full swing.

Except the committee has dragged its feet. Hearings have been held for only three of the nominees, and a vote has yet to be scheduled.

A total of 31 privacy organizations and rights groups, including the ACLU, Open Technology Institute and the Center for Democracy & Technology, signed on to the letter calling on the Senate panel to push forward with the hearings and vote on the nominees.

“During the eleven years since Congress created the PCLOB as an independent agency, it has only operated with a quorum for four and one-half years,” the letter said. “Without a quorum, the PCLOB cannot issue oversight reports, provide the agency’s advice, or build upon the agency foundations laid by the original members. It is also critical that the PCLOB operate with a full bipartisan slate of qualified individuals.”

The coalition called the lack of quorum a “lost opportunity to better inform the public and facilitate Congressional action.”

Given the continuing aftermath of the massive leak of classified documents by NSA whistleblower Edward Snowden, the board’s work is more important than ever, the letter said.

Spokespeople for the Senate Judiciary Committee did not respond to a request for comment.


Source: TechCrunch


After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

Posted by on Jun 17, 2018 in Adobe, Amazon, Amazon Web Services, Atlassian, AWS, bigid, CIO, cloud applications, cloud computing, cloud-native computing, Column, computing, CRM, digitalocean, Dropbox, Edward Snowden, enterprise software, European Union, Facebook, Getty-Images, github enterprise, Google, hipchat, Infrastructure as a Service, iPhone, Marc Benioff, Microsoft, open source software, oracle, oracle corporation, Packet, RAM, SaaS, Salesforce, salesforce.com, slack, software as a service, software vendors, TC, United States, web services | 6 comments

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect back on the change that Marc Benioff created with the software-as-a-service (SaaS) model for enterprise software with his launch of Salesforce.com.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition, Microsoft’s purchase of GitHub, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security is also becoming a major issue, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?

Source: Getty Images/KTSDESIGN/SCIENCE PHOTO LIBRARY

The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent each year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and elongated.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
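The economics behind point 1 are easy to make concrete. A quick sketch with hypothetical numbers — the license price, seat price and term below are illustrative assumptions, not figures from the book; only the 20 percent annual support fee comes from the text above:

```python
# Hypothetical comparison of the two pricing models described above.
UPFRONT_LICENSE = 1_000_000    # one-time license fee (assumed)
SUPPORT_RATE = 0.20            # annual support fee, per the traditional model
SEAT_PRICE = 125               # $/user/month subscription (assumed)
USERS = 200
YEARS = 5

# Traditional: pay everything upfront, then 20% of the license each year.
traditional = UPFRONT_LICENSE * (1 + SUPPORT_RATE * YEARS)
# Subscription: pay per seat, per month, only while you use it.
subscription = SEAT_PRICE * USERS * 12 * YEARS

print(f"traditional:  ${traditional:,.0f}")   # $2,000,000, all risk upfront
print(f"subscription: ${subscription:,.0f}")  # $1,500,000, paid as used
```

Even where the five-year totals end up similar, the subscription spreads the customer’s risk over time, which is exactly the realignment of vendor and customer success Benioff was after.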

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late ‘00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than would ever have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.

Source: Getty Images

It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Years ago already, stalwart products like Microsoft Office and the Adobe Suite successfully made the switch from the upfront model to thriving subscription businesses. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) have long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The cost of compute and storage have been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packed as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes.
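The price collapse in the first bullet is worth checking against the figures cited in this piece:

```python
# Cost-per-unit figures cited above (1999 vs. roughly today).
RAM_1999, RAM_NOW = 1_000, 5        # $ per gigabyte of RAM
DISK_1999, DISK_NOW = 30_000, 30    # $ per terabyte of disk

print(f"RAM is {RAM_1999 / RAM_NOW:.0f}x cheaper")     # 200x
print(f"disk is {DISK_1999 / DISK_NOW:.0f}x cheaper")  # 1000x
```

With two to three orders of magnitude taken out of hardware costs, the savings from squeezing many tenants onto shared machines simply matter far less than they did in 1999.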

Source: Getty Images/ERHUI1979

What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to gain assurance about the security posture of Amazon, Microsoft and Google than it is to gain the same level of assurance about each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution led software vendors to develop their applications cloud-first, but they're now building those applications with technologies (such as containers) that make it possible to replicate their deployment onto any infrastructure. This shift to what's called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private, cloud-native-instance-first approach to their application offerings, while SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that will provide this optionality going forward.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.
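The back-of-the-envelope attack-surface math in point 3 above is easy to make concrete. Note that the 1,000-application and 250-employee figures are the article's illustrative averages, not measured data:

```python
# Rough attack-surface estimate from the article's illustrative figures.
saas_apps_per_enterprise = 1_000   # average large-enterprise SaaS footprint
employees_per_vendor = 250         # average headcount per SaaS vendor

# Each vendor employee is a potential phishing or rogue-extension entry point.
entry_points = saas_apps_per_enterprise * employees_per_vendor
print(entry_points)  # prints 250000
```

The point of the estimate is not precision but scale: every SaaS vendor an enterprise adopts multiplies the number of people whose mistakes can expose that enterprise's data.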


The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic about where their applications are deployed and about who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while preserving what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.


Source: The Tech Crunch
