Diving into Google Cloud Next and the future of the cloud ecosystem

Posted on Apr 14, 2019 in Artificial Intelligence, Cloud, Developer, Enterprise, Events, Government, Personnel, SaaS, Startups, Talent, TC | 0 comments

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller offered up their analysis of the major announcements that came out of Google’s recent Cloud Next conference, as well as their opinions on the outlook for the company going forward.

Google Cloud announced a series of products, packages and services that it believes will improve the company’s competitive position and differentiate it from AWS and other peers. Frederic and Ron discuss all of Google’s most promising announcements, including its product for managing hybrid clouds, its new end-to-end AI platform and the company’s heightened effort to improve customer service, communication and ease of use.

“They have all of these AI and machine learning technologies, they have serverless technologies, they have containerization technologies — they have this whole range of technologies.

But it’s very difficult for the average company to take these technologies and know what to do with them, or to have the staff and the expertise to be able to make good use of them. So, the more they do things like this where they package them into products and make them much more accessible to the enterprise at large, the more successful that’s likely going to be because people can see how they can use these.

…Google does have thousands of engineers, and they have very smart people, but not every company does, and that’s the whole idea of the cloud. The cloud is supposed to take this stuff, put it together in such a way that you don’t have to be Google, or you don’t have to be Facebook, you don’t have to be Amazon, and you can take the same technology and put it to use in your company.”

Frederic and Ron dive deeper into how the new offerings may impact Google’s market share in the cloud ecosystem and which verticals represent the best opportunity for Google to win. The two also dig into the future of open source in cloud and how they see customer use cases for cloud infrastructure evolving.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 


Source: TechCrunch

Moka raises $27M led by Hillhouse to make hiring more data-driven in China

Posted on Mar 4, 2019 in Artificial Intelligence, Burger King, California, China, ggv capital, GSR Ventures, Hillhouse Capital, Hiring, JD.com, moka, recruitment, SaaS, Stanford University, TC, Tencent, turo, University of California, University of Michigan, Xiaomi | 0 comments

Moka, a startup that wants to make talent acquisition a little more data-driven for China-based companies that range from smartphone giant Xiaomi to Burger King’s local business, announced Monday that it has raised a 180 million yuan ($27 million) Series B round of funding.

The deal was led by Hillhouse Capital, an investor in top Chinese technology companies such as Tencent, Baidu, JD.com, Pinduoduo — just to name a few. Other investors who took part include Xianghe Capital, an investment firm founded by two former Baidu executives, Chinese private equity firm GSR Ventures and GGV Capital.

Moka claims more than 500 enterprise customers were paying for its services by the end of 2018. Other notable clients include McDonald’s and YY, one of China’s top livestreaming services. It plans to use its new capital to hire staff, build new products and expand the scope of its business.

Founded in 2015, Moka compares itself to Workday and Salesforce in the U.S. It has created a suite of software aimed at making recruiting easier and cheaper for companies with upwards of 500 employees. Its solutions cover the full hiring cycle. To start, Moka allows recruiters to post job listings across multiple platforms with one click, saving them the hassle of hopping between portals. Its AI-enabled screening program then automatically filters candidates and makes recommendations for companies. What comes next is the interview, which Moka helps streamline with automatic email and message reminders for job applicants and optimized plans for interviewers on when and where to meet their candidates.

That’s not the end, as Moka also wants to capture what happens after the talent is on board. The startup helps companies maintain a talent database consisting of existing employees and potential hires. The service lets companies keep close tabs on their staff (an employee updating their resume triggers an alert to the employer) and notifies the recruiter once the system detects suitable candidates.

Moka is among a wave of startups founded by Chinese entrepreneurs with overseas education and work experience. Zhao Oulun, whose English nickname is Orion, graduated from the University of California, Berkeley and worked at San Francisco-based peer-to-peer car sharing company Turo before founding Moka with Li Guoxing. Li himself is also a “sea turtle,” a colloquial term in Chinese that describes overseas-educated graduates who return home to work. Li graduated from the University of Michigan and Stanford University, and worked at Facebook as an engineer.

When the founders re-entered China, they saw something was missing in the booming domestic business environment: effective talent management.

“Businesses are flourishing, but at the same time many of them fall short in internal organization and operation. To a large extent, the issue pertains to the lack of digital and meticulous operation for human resources, which slows down decision-making and leads to mistakes around talents and company organization,” says chief executive Zhao in a statement.

Moka’s mission has caught the attention of investors. Jixun Foo, a partner at Moka backer GGV Capital, also believes China’s businesses can benefit from a data-driven approach to people management: “We are positive about Moka becoming a comprehensive HR service provider in the future through its unique data-powered and intelligent solutions.”


Source: TechCrunch

Measuring and benchmarking the four vital signs of SaaS

Posted on Feb 26, 2019 in benchmarking, Column, SaaS | 0 comments

SaaS metrics should be to a management team what patient vital signs are to an emergency room doctor: a simple set of universally understood numbers that allow a doctor to quickly know how ill a patient is and what needs fixing first.

Heart rate, blood pressure, respiratory rate and temperature are the big four vital signs in the ER. Everyone knows what they are, what they mean, and what good and bad looks like. When a patient is wheeled in, the doctor does not start by asking the EMT, “How exactly are we defining heart rate?” This shared understanding allows for rapid evaluation, then fast, focused action.

Not so much in SaaS, where discussions about definitions are all you hear. There are too many metrics, too many things to measure and too many useful but incompatible ways to measure them. This results in a loss of clarity, comprehension, and — most importantly — comparability across different companies.

At Scale we’ve spent the last 20 years evaluating investments in SaaS and other subscription companies. We have built an internal shared belief of what the Four Vital Signs of SaaS are, and how exactly to measure them. We have opted for simplicity over complexity in selecting these metrics. This has allowed us to benchmark accurately across companies and to know what a realistic version of “good” looks like.

Scale recently launched Scale Studio, an open-to-anyone tool that gives cloud and SaaS companies performance benchmarks based on these vital signs and 20 years of data across more than 300 companies.

The four vital signs of SaaS

The vital signs of SaaS are Revenue Growth, Sales Efficiency, Revenue Churn and Cash Burn. Almost everything that matters about the financial performance of a SaaS business is captured in these four metrics.

Revenue Growth matters because growth is the central purpose of a startup, and thus, for an investor, the most important driver of whether value can be created at all. We have found that at each stage of a company’s development there is a minimum required level of growth below which a startup will struggle to attract venture capital. We’ve analyzed this Mendoza Line for SaaS growth previously on TechCrunch.

Sales Efficiency matters because software at scale is all about distribution, and thus the relationship between dollars invested in sales and marketing and dollars back via revenue is the key determinant of how much value is created per dollar invested. In a perfect world I would call this Distribution Efficiency, because calling it Sales Efficiency tends wrongly to narrow the focus on this metric to just sales, but that ship has sailed.

Revenue Churn matters because as growth slows the impact of churn escalates and provides an upper bound on how big a company can become. More fundamentally, high churn is just the financial evidence of a product that is not delivering value to customers. Products that do not deliver value cannot build value for their investors, which is why my former board colleague at Box, Mamoon Hamid, is right in saying that every company needs a non-financial north star.

And of course, Cash Burn matters because, well, duh — try running a company without it.

Measuring and benchmarking the four vital signs of SaaS

Vital Sign No. 1: Revenue Growth

There are multiple ways to measure revenue and thus Revenue Growth. ARR-based metrics are more forward-looking, but GAAP revenue tends to be calculated more accurately and is thus more comparable across companies. It typically lags ARR by about a quarter, and for simplicity we have not excluded services revenue. The simplest measure of Revenue Growth is the quarterly GAAP revenue run rate compared to the same quarter GAAP revenue one year ago (or a year from now for a forward-growth estimate). We also measure revenue growth using the ARR Growth Rate and a forward-looking measure of ARR growth that we call iCAGR.
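
To make the arithmetic concrete, here is a minimal sketch of that simple growth measure in Python. The figures are hypothetical, and this is an illustration rather than Scale’s actual tooling:

```python
# Minimal sketch of the simple Revenue Growth measure described above.
# Figures are hypothetical and in $M.

def run_rate(quarterly_gaap_revenue: float) -> float:
    """Annualized run rate: quarterly GAAP revenue times four."""
    return quarterly_gaap_revenue * 4

def yoy_growth(rev_this_quarter: float, rev_same_quarter_last_year: float) -> float:
    """Trailing year-over-year growth from two quarterly GAAP revenue figures."""
    return rev_this_quarter / rev_same_quarter_last_year - 1

# A company booking $5M of GAAP revenue this quarter ($20M run rate) that
# booked $2.75M in the same quarter last year grew roughly 82% year over year.
print(f"{run_rate(5.0):.0f}M run rate, {yoy_growth(5.0, 2.75):.0%} Y/Y growth")
```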

We benchmark Revenue Growth by looking at companies at a comparable revenue run rate because revenue growth rates decline fairly predictably as absolute revenue scale increases (as is clear in the chart below). This means that the top quartile Revenue Growth rate of 123 percent at a $20 million revenue run rate would represent bottom quartile growth at a $2 million revenue run rate.

The chart and table below show, for various revenue run rates, which revenue growth rates represent top, median and bottom quartile performance. The Scale Studio data set has 300+ public and private SaaS companies, some of which have become public, many of which have not and most of which have raised at least some venture capital.

Using this table, a team can quickly benchmark their company’s performance. For example, a company that grew last year 80 percent from $11 million to $20 million is growing at just above 50th percentile growth for SaaS companies at that stage, which is 78 percent. We can also generate a separate table showing the same data on a forward-revenue basis, to allow a company to answer the related question: If I am at a $20 million run rate now, and I grow next year at 50 percent, how will I be doing relative to other SaaS companies?
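
As a sketch of how such a benchmark lookup works: the $20 million cutoffs below come from the figures cited in this article (123 percent top quartile, 78 percent median), while the bottom-quartile cutoff is an invented placeholder, not Scale Studio data.

```python
# Quartile banding for Y/Y growth at a given revenue run rate.
CUTOFFS_AT_20M = {"top": 1.23, "median": 0.78, "bottom": 0.30}  # "bottom" is hypothetical

def growth_band(yoy: float, cutoffs: dict) -> str:
    """Map a Y/Y growth rate to its quartile band at this run rate."""
    if yoy >= cutoffs["top"]:
        return "top quartile"
    if yoy >= cutoffs["median"]:
        return "second quartile"
    if yoy >= cutoffs["bottom"]:
        return "third quartile"
    return "bottom quartile"

print(growth_band(0.80, CUTOFFS_AT_20M))  # -> "second quartile"
```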

It is also interesting to see that the data here broadly agrees with a separate calculation we did recently around the Mendoza Line for SaaS growth that tracks at or just above the bottom quartile of growth rate. The Mendoza Line was derived by math; this table was generated from real data — it is good to see both estimates roughly agree.

Another way to look at the same data is to think about the revenue trajectory over time, or “how many years to $100 million.”  The graph and table below show for a consistent top, median or bottom quartile company (at the cutoff points) how long it takes to grow from $1 million to $100 million. A company that grows consistently, just at the top quartile cutoff growth rate, takes six years to get to a $100 million run rate, a median performer takes eight years and a bottom quartile performer does not yet get there in 10 years. This is a calculation with all sorts of survivorship bias problems, because the slow growth companies tend to get acquired and not make it all the way to $100 million, but the analysis is roughly right.
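
The underlying compounding is easy to sketch. The constant growth rates below are illustrative values chosen to reproduce the six-, eight- and ten-plus-year trajectories described above; the real quartile cutoffs in the data decline as revenue scales, so treat this as a simplification:

```python
import math

def years_to_target(start: float, target: float, growth_rate: float) -> int:
    """Whole years of compounding at a constant growth_rate until start reaches target."""
    return math.ceil(math.log(target / start) / math.log(1 + growth_rate))

# Illustrative constant rates (not the actual quartile cutoffs in the data set):
for label, rate in [("~top quartile", 1.20), ("~median", 0.80), ("~bottom quartile", 0.55)]:
    print(label, years_to_target(1.0, 100.0, rate), "years from $1M to $100M")
# -> 6, 8 and 11 years respectively
```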

This data also matches well to various rules of thumb. An example is the T2D3 (triple-triple-double-double-double) rule, which is also shown in the table above. T2D3 matches top quartile performance for the first four years and becomes just a little aspirational in year five. If you fail to double from year four to year five and only grow at 90 percent, we would still be glad to talk to you! (The data also roughly matches the Bessemer State of the Cloud Report, which shows an estimate of times to $100 million for best-in-class companies).

Vital Sign No. 2: Sales Efficiency

Sales Efficiency metrics (and again the reminder to think of this more broadly as Distribution Efficiency!) measure the relationship between dollars in (spent on Sales & Marketing) and dollars out (in the form of new revenue). For a recurring revenue business, by far the most intuitive way to measure this concept is by dividing the Gross or Net New ARR for the quarter by the fully loaded Sales & Marketing spend for the same quarter. The Gross SE metric measures the effectiveness of the company in generating new ARR, and the Net SE metric measures the overall effectiveness of the business in both generating and retaining revenue.
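
In code, the two measures are one-liners. A sketch with hypothetical quarterly figures, all in $M:

```python
# Gross and Net Sales Efficiency as defined above: quarterly ARR added (or
# added less churned) divided by the same quarter's fully loaded S&M spend.

def gross_sales_efficiency(gross_new_arr: float, sm_spend: float) -> float:
    return gross_new_arr / sm_spend

def net_sales_efficiency(gross_new_arr: float, churned_arr: float, sm_spend: float) -> float:
    return (gross_new_arr - churned_arr) / sm_spend

print(gross_sales_efficiency(3.0, 3.0))     # 1.0x: $1 of new ARR per $1 of S&M
print(net_sales_efficiency(3.0, 0.6, 3.0))  # 0.8x after accounting for churn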

We love the simplicity of this calculation and its direct actionability. It can be explained in 30 seconds at a Sales Kick-Off meeting in a way that no other measure can. “We gave you this money, you gave us this ARR.” We are not a fan of putting lags in this method (comparing this quarter’s Net New ARR with last quarter’s S&M spend). There is some logic to the idea that there is lag between spend and results, but once you start to adjust, you end up with all sorts of special pleading. Keep it simple.

For a vital signs diagnosis, we prefer this metric to the complexity of the LTV/CAC calculation. LTV to CAC works really well for consumer businesses and for B2B businesses that have fairly consistent deal sizes and low net churn. A former Scale portfolio company, HubSpot, did a brilliant job of orienting their business around this metric. However, for enterprise businesses with highly variable deal sizes and strong positive net cohort growth over time, the calculation becomes arbitrary. The underlying idea is real, namely that enterprise customers can have lower Gross SE but higher ultimate value as the cohorts grow, but trying to track and explain quarterly fluctuations is hard.

Another complexity we choose to avoid is using Gross Margin instead of Revenue. It is of course more correct to use Gross Margin, but especially at the early stages of a SaaS company, Gross Margin fluctuates based on fixed cost recovery issues that significantly distort the calculation.

The problem with an ARR-based Sales Efficiency metric is it doesn’t allow easy comparison across companies. ARR is not reported by public companies and private company ARR numbers are often suspect. Our workaround was to slightly tweak the calculation for Net SE, replacing the numerator (Net New ARR) with the intra-quarter difference in GAAP revenue multiplied by 4 (annualized). We call this formula using GAAP instead of ARR the Magic Number and it should be equal to Net Sales Efficiency with a one quarter lag (to allow ARR to convert to GAAP).
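
A sketch of that substitution, again with hypothetical figures in $M; the payback helper anticipates the payback discussion a few paragraphs below:

```python
# Magic Number: annualized intra-quarter GAAP revenue gain per dollar of S&M spend.

def magic_number(gaap_rev_this_q: float, gaap_rev_prior_q: float, sm_spend: float) -> float:
    return (gaap_rev_this_q - gaap_rev_prior_q) * 4 / sm_spend

def payback_months(magic: float) -> float:
    """Revenue-basis S&M payback: simply the inverse of the Magic Number."""
    return 12 / magic

mn = magic_number(gaap_rev_this_q=5.5, gaap_rev_prior_q=5.0, sm_spend=2.9)
print(f"{mn:.2f}x Magic Number, ~{payback_months(mn):.0f} month payback")
# -> 0.69x, ~17 months: right around the data set's median
```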

As a rule of thumb: When you’re talking to your team and want to keep it simple, talk Gross and Net Sales Efficiency; when you want to do benchmarking against other companies, use Magic Number.

The benchmarking results here are very different. Unlike Revenue Growth, which clearly declines as absolute revenue increases, we have found Sales Efficiency to be fairly consistent across the entire SaaS universe. The median Magic Number for our data set is between .7x and .8x, and the range from top quartile to bottom quartile is 1.5x to .5x. This matches the public company data set, where the median is .7x today.

Payback (on a revenue not a gross margin basis) is simply the inverse of this number, which implies that the average SaaS company is earning back in revenue what it spends in sales and marketing in one divided by .7 years — 17 months — with a top/bottom quartile range of eight months to two years.

We have also observed something that does not come through in this table, which is that Sales Efficiency tends to be persistent over time for a given company, especially after $10 million. A good go-to-market model at a $10 million run rate tends to still be a good model at $100 million. And bad sales efficiency at $10 million is hard to change later.

The high top-quartile Magic Number for the $1 million revenue run rate represents an anomaly early on, as often founders are doing the selling themselves (and probably not allocating their costs to Sales & Marketing!). Pretty quickly top quartile Magic Number falls to 1.4x and then to 1.0x at scale.

Vital Sign No. 3: Revenue Churn

The simplest way to measure Gross and Net Churn is by taking Churned ARR (Gross) and Churned less Upsell ARR (Net) and dividing it by opening ARR for the period, usually a quarter. In the tables below, we show Gross Churn by quarter and annualized.
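
A sketch of those two definitions with hypothetical quarterly figures in $M. Note the negative-sign convention matches the tables here, and the multiply-by-four annualization is an assumption on our part; the compounding convention isn’t spelled out in the text:

```python
# Gross and Net Churn as defined above, expressed as negative percentages.

def gross_churn(churned_arr: float, opening_arr: float) -> float:
    return -churned_arr / opening_arr

def net_churn(churned_arr: float, upsell_arr: float, opening_arr: float) -> float:
    return -(churned_arr - upsell_arr) / opening_arr

def annualized(quarterly_rate: float) -> float:
    """Simple annualization (x4); an assumed convention, not stated in the article."""
    return quarterly_rate * 4

print(f"{annualized(gross_churn(0.25, 20.0)):.0%}")      # -> -5% gross, annualized
print(f"{annualized(net_churn(0.25, 0.40, 20.0)):.0%}")  # -> 3% net (upsell exceeds churn)
```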

We acknowledge that this metric is a horrible oversimplification. For the Sales Efficiency calculation above, the simple method is also, we believe, the best method, but for the churn calculation this simplification comes at a significant cost in terms of being able to diagnose underlying issues. At high growth rates especially, this measure understates actual churn. However, the vital signs framework calls for simplicity to allow consistent, relevant benchmarking across companies. If this simple benchmarking exercise exposes a churn problem, then a deeper dive using retention analysis and a cohort analysis is an absolutely required next step.

The data above shows Darwinian selection at work. Early on some companies have huge churn but they have to either improve or die. At a $20 million revenue run rate, even bottom quartile companies have annualized Gross Churn hovering around -22 percent.

Vital Sign No. 4: Cash Burn / Operating Income

Internally, we measure cash burn by looking at free cash flow for a quarter (operating cash flow less capex) and compare it to cash on the balance sheet to calculate a cash out date. For confidentiality reasons, we do not ask for cash balances in Scale Studio. A reasonable proxy for cash burn is Operating Income, and the chart and table below show Operating Income as a percent of Revenue at different revenue run rates.
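
A sketch of that internal calculation with hypothetical figures in $M; Scale Studio itself benchmarks the Operating Income proxy rather than cash balances:

```python
# Free cash flow and a naive cash-out estimate, as described above.

def free_cash_flow(operating_cash_flow: float, capex: float) -> float:
    return operating_cash_flow - capex

def quarters_of_runway(cash_balance: float, quarterly_fcf: float) -> float:
    """Quarters until cash-out, assuming burn stays flat (a strong assumption)."""
    if quarterly_fcf >= 0:
        return float("inf")  # cash-flow positive: no cash-out date
    return cash_balance / -quarterly_fcf

fcf = free_cash_flow(operating_cash_flow=-2.8, capex=0.2)        # burning $3M a quarter
print(quarters_of_runway(cash_balance=24.0, quarterly_fcf=fcf))  # -> 8.0 quarters
```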

To illustrate what this metric means, at a $20 million revenue run rate the median company in the data set is losing 63 percent of revenue, $12.7 million, or colloquially “burning $1 million a month.” In Scale Studio you can also further benchmark total operating expenses one level down, across each of Gross Margin, Sales/Marketing, R&D and G&A.

The most important point to make about this metric is that in a recurring revenue business, operating income, or “burn,” however calculated, is not a measure of efficiency. Instead it is a measure of how aggressively a company is investing. A high operating loss coupled with a high growth rate and high sales efficiency is an aggressive but probably sensible strategy, provided of course the company has access to capital. A high burn, low growth company is a disaster in the making.

Exactly how much burn for exactly how much growth will be the subject of another post, but any comment that tries to link burn rate to value creation without taking growth rate into account is simply wrong. Many of the most successful SaaS companies were in the bottom quartile on this metric at $100 million in run-rate revenue, but were also in the top quartile on revenue growth. Worth highlighting again is the proviso regarding access to capital: if the cash runs out, even the best business dies.

The very best companies are those such as Veeva and Atlassian where a high Sales Efficiency allowed them to simultaneously be top quartile on growth, and top quartile on operating income profitability. It is no accident that companies with these characteristics get premium public valuations.

What are your company’s vital signs?

Getting started with Scale Studio is simple: you enter nine basic data points for each of your trailing eight quarters and then generate a benchmark report. The benchmarks use a sample set consisting of companies at the same revenue stage as yours. This allows for much more accurate benchmarking, especially for Revenue Growth and Operating Income, which, as we have said, are a direct function of revenue run rate. The benchmarks give you a sense of your performance that is clear, concise and comparable. Your report might say something like:

“At your current revenue run rate of $5 million, your Y/Y Revenue Growth rate of 150 percent is in the second quartile for companies of your size, your Magic Number of 0.8 is in the second quartile, your Gross Churn of -1 percent is in the top quartile, and your Operating Income of -152 percent of revenue is in the second quartile.”

Vital signs don’t cure patients, doctors do. SaaS vital signs don’t fix companies, management teams do. But realistic benchmarking metrics do what ER vital signs do: pinpoint issues, provide actionable context and allow you to get to work.

Jeremy Kaufmann contributed to this article.


Source: TechCrunch

Has the fight over privacy changed at all in 2019?

Posted on Jan 26, 2019 in Advertising Tech, Albert Gidari, big data, Box, brave, Center for Democracy and Technology, Christopher Wolf, Cloud, Community, data, data privacy, data usage, DuckDuckGo, eCommerce, Enterprise, European Union, Future of Privacy Forum, Gabriel Weinberg, GDPR, Google, Government, Hogan Lovells, Information Technology Industry Council, Internet Association, Johnny Ryan, Melika Carroll, Opinion, Philanthropy, Policy, Privacy, SaaS, Security, stanford, Stanford University, Startups, TC, the internet association, user data | 0 comments

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks as France’s privacy regulator imposed a record fine on Google under Europe’s General Data Protection Regulation (GDPR), a fine the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy and has the outlook changed?”

This week’s participants include: Albert Gidari (Stanford Center for Internet and Society), Gabriel Weinberg (DuckDuckGo), Melika Carroll (Internet Association), Johnny Ryan (Brave), John Miller (Information Technology Industry Council), Nuala O’Connor (Center for Democracy & Technology), Chris Baker (Box) and Christopher Wolf (Future of Privacy Forum).

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: Arman.Tabatabai@techcrunch.com.


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top ranking in privacy law from Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more ‘privacy.’”

For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies is a good thing, but it doesn’t solve the privacy paradox that those same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), or that the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to be tracked less online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”

Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of the rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign-in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data”, and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases, and defines practical standards. Cutting edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C.-based advocacy group for the high tech sector. Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches, like the framework ITI released last fall, must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”

We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and reading and parsing as many different policies and settings. No individual has the time nor capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, but not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked about what they’re learning. They’re increasingly paying attention, and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”

2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”

With the EU GDPR in effect since last May (setting a standard other nations are emulating), with the adoption of a highly regulatory and broadly applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come when Congress will adopt a comprehensive federal privacy law. Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with highly developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.


Source: TechCrunch

Moglix raises $23M to digitize India’s manufacturing supply chain

Posted on Dec 18, 2018 in Accel Partners, Amazon, Asia, chairman, E-Commerce, eCommerce, Flipkart, funding, Fundings & Exits, IFC, India, innoven capital, jungle ventures, Moglix, online payments, ratan tata, SaaS, series C, Singapore, temasek, Walmart, World Bank | 0 comments

We hear a lot about India’s e-commerce battle between Walmart, which bought Flipkart for $17 billion, and Amazon. But over in the B2B space, Moglix — an e-commerce service for buying manufacturing products that’s been making strides — today announced a $23 million Series C round ahead of a bigger round and impending global expansion.

This new round was led by some impressive names that Moglix counts as existing investors: Accel Partners, Jungle Ventures and World Bank-affiliated IFC. Other returning backers that partook include Venture Highway, ex-Twitter VP Shailesh Rao and InnoVen Capital, a venture debt fund affiliated with Singapore’s Temasek. The startup also counts Ratan Tata — the former chairman of manufacturing giant Tata Sons — Singapore’s SeedPlus and Rocketship on its cap table.

Founded in 2015 by former Googler Rahul Garg, Moglix connects manufacturing OEMs and their resellers with business buyers. Garg told TechCrunch last year that it is named after the main character in The Jungle Book series in order to “bring global standards to the Indian manufacturing sector.” The country accounts for 90 percent of its transactions, but the startup is also focused on global opportunities.

“The entire B2B commerce industry in India will move to a transactional model,” Garg told us in an interview this week. He sees Moglix playing a key role in bringing about the same impact on B2B that Amazon had on consumer e-commerce.

“We think there’s an opportunity to start from a blank sheet and rewrite how B2B transactions should be done in the country,” he added. “The entire supply chain has been pretty much offline and fragmented.”

In a little over three years, Moglix has raced to its Series C round with rapid expansion that has seen it grow to 10 centers in India and a customer base that covers over 5,000 suppliers and the SMEs they supply.

Yet, despite that, Garg has kept things lean as the company has raised just $41 million across those rounds, including a $12 million Series B last year, with under 500 staff. However, Moglix is laying the foundations for what he expects will be a much larger fundraising round next year that will see the company go after international opportunities.

“This [new] round is about doubling, tripling, down on India but also establishing a seed in a couple of countries we are looking at,” Garg said.

Moglix aims to make the B2B online buying experience as intuitive and user-friendly as e-commerce sites are for consumers

Adding further color, he explained that Moglix will expand its SaaS procurement service, which helps digitize B2B purchasing, to 100 markets worldwide as part of its global vision. While that service does have tie-ins with the Moglix platform, it also allows any customer to bring their existing sales channels into a digital environment, thereby preparing them to move their purchasing online, ideally with Moglix. That service is currently available in eight countries, Garg confirmed.

Beyond making connections on the buying side, Moglix also works with major OEM brands and their key resellers. The basic pitch is the benefits of digital commerce data — detailed information on what your target customers buy or browse — as well as the strength of Moglix’s distribution system, tighter fraud prevention and the aforementioned digital revolution.

“Brands have started to realize [that digital] will be a very important channel and that they need to use both [online and offline] for crafting their distribution,” explained Garg.

Indeed, a much-cited SPO India report forecasts that B2B in India is currently a $300 billion a year market that is poised to reach $700 billion by 2020. Garg estimates that his company has a 0.5 percent market share within its manufacturing niche. Over the coming five years, he believes it can reach a double-digit share.

While it may not be as sexy as consumer commerce, stronger unit economics — thanks in large part to the different buying dynamics of business customers, who are less swayed by discounts — make the space something to keep an eye on as India’s digital development continues. Already, Garg credited GST — the move to digitize taxation — as a key development that has aided his company.

“GST enabled good trust and accelerated everything by 2/3X,” he said.

There might yet be further boons as the Indian government chases its strategy of becoming a global manufacturing hub.


Source: TechCrunch

Why you need a supercomputer to build a house

Posted on Dec 8, 2018 in affordable housing, Artificial Intelligence, building, building codes, buildings, camino, concur, concur labs, Cove.Tool, cover, Cover Technologies, Developer, Enterprise, envelope, Government, GreenTech, housing, Logistics, machine learning, Policy, Real Estate, regulation, SaaS, Startups, TC, zoning | 0 comments

When the hell did building a house become so complicated?

Don’t let the folks on HGTV fool you. The process of building a home nowadays is incredibly painful. Just applying for the necessary permits can be a soul-crushing undertaking that’ll have you running around the city, filling out useless forms, and waiting in motionless lines under fluorescent lights at City Hall wondering whether you should have just moved back in with your parents.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: Arman.Tabatabai@techcrunch.com.

And to actually get approval for those permits, your future home will have to satisfy a set of conditions that is a factorial of complex and conflicting federal, state and city building codes, separate sets of fire and energy requirements, and quasi-legal construction standards set by various independent agencies.

It wasn’t always this hard – remember when you’d hear people say “my grandparents built this house with their bare hands?” These proliferating rules have been among the main causes of the rapidly rising cost of housing in America and other developed nations. The good news is that a new generation of startups is identifying and simplifying these thickets of rules, and the future of housing may be determined as much by machine learning as woodworking.

When directions become deterrents

Cities once solely created the building codes that dictate the requirements for almost every aspect of a building’s design, and they structured those guidelines based on local terrain, climates and risks. Over time, townships, states, federally-recognized organizations and independent groups that sprouted from the insurance industry further created their own “model” building codes.

The complexity starts here. The federal codes and independent agency standards are optional for states, who have their own codes which are optional for cities, who have their own codes that are often inconsistent with the state’s and are optional for individual townships. Thus, local building codes are these ever-changing and constantly-swelling mutant books made up of whichever aspects of these different codes local governments choose to mix together. For instance, New York City’s building code is made up of five sections, 76 chapters and 35 appendices, alongside a separate set of 67 updates (The 2014 edition is available as a book for $155, and it makes a great gift for someone you never want to talk to again).

In short: what a shit show.

Because of the hyper-localized and overlapping nature of building codes, a home in one location can be subject to a completely different set of requirements than one elsewhere. So it’s really freaking difficult to even understand what you’re allowed to build, the conditions you need to satisfy, and how to best meet those conditions.

There are certain levels of complexity in housing codes that are hard to avoid. The structural integrity of a home is dependent on everything from walls to erosion and wind-flow. There are countless types of material and technology used in buildings, all of which are constantly evolving.

Thus, each thousand-page codebook from the various federal, state, city, township and independent agencies – all dictating interconnecting, location- and structure-dependent needs – leads to an incredibly expansive decision tree that requires an endless set of simulations to fully understand all the options you have to reach compliance, and their respective cost-effectiveness and efficiency.

So homebuilders are often forced to turn to costly consultants or settle on designs that satisfy code but aren’t cost-efficient. And if construction issues cause you to fall short of the outcomes you expected, you could face hefty fines, delays or gigantic cost overruns from redesigns and rebuilds. All these costs flow through the lifecycle of a building, ultimately impacting affordability and access for homeowners and renters.

Startups are helping people crack the code

Strap on your hard hat – there may be hope for your dream home after all.

The friction, inefficiencies, and pure agony caused by our increasingly convoluted building codes have given rise to a growing set of companies that are helping people make sense of the home-building process by incorporating regulations directly into their software.

Using machine learning, their platforms run advanced scenario-analysis around interweaving building codes and inter-dependent structural variables, allowing users to create compliant designs and regulatory-informed decisions without having to ever encounter the regulations themselves.

For example, the prefab housing startup Cover is helping people figure out what kind of backyard homes they can design and build on their properties based on local zoning and permitting regulations.

Some startups are trying to provide similar services to developers of larger scale buildings as well. Just this past week, I covered the seed round for a startup called Cove.Tool, which analyzes local building energy codes – based on location and project-level characteristics specified by the developer – and spits out the most cost-effective and energy-efficient resource mix that can be built to hit local energy requirements.

And startups aren’t just simplifying the regulatory pains of the housing process through building codes. Envelope is helping developers make sense of our equally tortuous zoning codes, while Cover and companies like Camino are helping steer home and business-owners through arduous and analog permitting processes.

Look, I’m not saying codes are bad. In fact, I think building codes are good and necessary – no one wants to live in a home that might cave in on itself the next time it snows. But I still can’t help but ask myself why the hell does it take AI to figure out how to build a house? Why do we have building codes that take a supercomputer to figure out?

Ultimately, it would probably help to have more standardized building codes that we actually clean up from time to time. More regional standardization would greatly reduce the number of conditional branches that exist. And if there were one set of accepted overarching codes that could still set precise requirements for all components of a building, there would only be one path of regulations to follow, greatly reducing the knowledge and analysis necessary to efficiently build a home.

But housing’s inherent ties to geography make standardization unlikely. Each region has different land conditions, climates, priorities and political motivations that cause governments to want their own set of rules.

Instead, governments seem to be fine with sidestepping the issues caused by hyper-regional building codes and leaving it up to startups to help people wade through the ridiculousness that paves the home-building process, in the same way Concur aids employees with infuriating corporate expensing policies.

For now, we can count on startups that are unlocking value and making housing more accessible, simpler and cheaper just by making the rules easier to understand. And maybe one day my grandkids can tell their friends how their grandpa built his house with his own supercomputer.


Source: TechCrunch

Zizoo, a booking.com for boats, sails for new markets with $7.4M on board

Posted on Nov 22, 2018 in Axel Springer Digital Ventures, berlin, Booking.com, Europe, founders fund, Fundings & Exits, millennials, Revo Capital, SaaS, sailing, Startups, Zizoo | 0 comments

Berlin-based Zizoo — a startup that describes itself as a booking.com for boats — has nabbed a €6.5 million (~$7.4M) Series A to help more millennials find holiday yachts to mess about taking selfies in.

Zizoo says its Series A — which was led by Revo Capital, with participation from new investors including Coparion, Check24 Ventures and PUSH Ventures — was “significantly oversubscribed”.

Existing investors including MairDumont Ventures, aws Founders Fund, Axel Springer Digital Ventures and Russmedia International also participated in the round.

We first came across Zizoo some three years ago when they won our pitching competition in Budapest.

We’re happy to say they’ve come a long way since, with a team that’s now 60 people strong and business relationships with ~1,500 charter companies — serving up more than 21,000 boats for rent across 30 countries, via a search-and-book platform that caters to a full range of “sailing experiences”, from experienced sailor to novice and, on the pricing front, luxury to budget.

Registered users passed the 100,000 mark this year, according to founder and CEO Anna Banicevic. She also tells us that revenue growth has been 2.5x year-on-year for the past three years.

Commenting on the Series A in a statement, Revo Capital’s managing director Cenk Bayrakdar said: “The yacht charter market is one of the most underserved verticals in the travel industry despite its huge potential. We believe in Zizoo’s successful future as a leading SaaS-enabled marketplace.”

The new funds will be put towards growing the business — including by expanding into new markets; plus product development and recruitment across the board.

Zizoo founder and CEO Anna Banicevic at its Berlin offices

“We’re looking to strengthen our presence in the US, where we’ve seen the biggest YoY growth, while also expanding our inventory in hot locations such as Greece, Spain and the Caribbean,” says Banicevic on market expansion. “We will also be aggressively pushing markets such as France and Spain where consumers show a growing interest in boat holidays.”

Zizoo is intending to hire 40 more employees over the course of the next year — to meet what it dubs “the booming demand for sailing experiences, especially among millennials”.

So why do millennials love boating holidays so much? Zizoo says the 20-40 age range makes up the “majority” of its customers.

Banicevic reckons the answer is they’re after a slice of ‘affordable luxury’.

“After the recent boom of the cruising industry, millennials are well familiar with the concept of holidays at sea. However, sailing holidays (yachting) are much more fitting to the millennial’s strive for independence, adventure and experiences off the beaten path,” she suggests.

“Yachting is a growing trend no longer reserved for the rich and famous — and millennials want a piece of that. On our platform, users can book a boat holiday for as low as £25 per person per night (this is an example of a sailboat in Croatia).”

On the competition front, she says the main competition is the offline sphere (“where 90% of business is conducted by a few large and many small travel agents”).

But a few rival platforms have emerged “in the last few years” — and here she reckons Zizoo has managed to outgrow the startup competition “thanks to our unique vertically integrated business model, offering suppliers a booking management system and making it easy for the user to book a boat holiday”.


Source: The Tech Crunch

Read More

Early-stage SaaS VC slip snaps recovery as public software stocks soar

Posted by on Oct 20, 2018 in Column, SaaS, TC, Venture Capital | 0 comments

A few months ago, Crunchbase News reported that a longstanding period of SaaS investment stagnation had come to an end.

However, the investment boom times didn’t necessarily carry over to the seed and early-stage end of the subscription software businesses.

The chart below displays deal and dollar volume of seed and early-stage venture investments[1] made into companies from around the world in Crunchbase’s SaaS category. Note that it is subject to historically documented reporting delays, which are most pronounced in seed and early-stage deals.

As can be plainly seen, Q3 2018 took quite a turn in terms of investment into SaaS. And it’s a bit bewildering as to why.

Overall, the venture market in Q3 hit record heights, and nearly every stage of investment saw more dollars and more rounds. Yet, as shown above, SaaS startups don’t appear to be beneficiaries of this influx of cash.

The public comparison

The picture becomes even more distorted when we account for public market SaaS comps, which set the benchmark for private companies. And that benchmark hasn’t been suffering. Public cloud companies have enjoyed a steep run up in asset value over the past several years.

The newly revamped BVP Nasdaq Emerging Cloud Index (formerly known as the Bessemer Cloud Index) tracks a basket of publicly traded SaaS stocks, including the likes of Salesforce and Adobe, and more recent debuts like Dropbox, DocuSign and Okta, among others.

Public cloud stocks soar

Public companies in the Bessemer Cloud Index grew their public valuations much faster than more broad-based indices like the Dow Jones Industrial Average and the S&P 500. Carried by the high and still-growing value of recurring revenue, the warm reception of SaaS companies new to public markets and (with the exception of the past couple of weeks) generally stable markets overall, public SaaS companies have done well. Despite a pretty absurd rate of growth on the public side, no such consistent growth could be found on the early-stage, private end of the market.

However, rather than viewing Q3 2018 as a disappointment for the early-stage SaaS investment market, it’s more like a reversion to the mean. It’s the first half of the year that’s the outlier, not Q3.

Big deals, slowing pace

The first half of 2018 had some truly huge early-stage deals cross the wires. In March, robotic process automation software company UiPath raised $153 million in its Series B. (UiPath just raised another $225 million in a Series C round in September.) Collaborative email inbox Front App raised $66 million in its January Series B. Rival Chicago logistics software companies FourKites and project44 each raised $35 million Series B rounds earlier in the year. On a one-off basis, these are big rounds, but collectively they add up to a huge pile of money.

The conclusion we’re drawn to here is that we were perhaps premature in declaring the long-time downtrend snapped to the upside.

[1] On the seed-stage side, that includes pre-seed, seed and angel rounds, as well as smaller convertible notes and proceeds from small equity crowdfunding campaigns. Early-stage deals include Series A and Series B rounds, as well as larger convertible notes and equity crowdfunding campaigns.
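As a sketch of that bucketing, here is how the footnote's rules might look in code. The round-type labels mirror the footnote, but the dollar threshold separating "smaller" from "larger" rounds is our assumption, since the footnote doesn't specify one:

```python
SEED_TYPES = {"pre_seed", "seed", "angel"}
EARLY_TYPES = {"series_a", "series_b"}
SIZE_SPLIT_TYPES = {"convertible_note", "equity_crowdfunding"}

# Assumed cutoff: the footnote distinguishes "smaller" and "larger"
# rounds without giving a number, so the $1M split is purely illustrative.
CUTOFF_USD = 1_000_000

def stage_bucket(round_type: str, amount_usd: float) -> str:
    """Classify a funding round into the chart's seed/early-stage buckets."""
    if round_type in SEED_TYPES:
        return "seed"
    if round_type in EARLY_TYPES:
        return "early"
    if round_type in SIZE_SPLIT_TYPES:
        return "seed" if amount_usd < CUTOFF_USD else "early"
    return "other"

print(stage_bucket("series_b", 35_000_000))       # 'early'
print(stage_bucket("convertible_note", 250_000))  # 'seed' (assumed cutoff)
```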


Source: The Tech Crunch

Read More

Integrate.ai pulls in $30M to help businesses make better customer-centric decisions

Posted by on Sep 12, 2018 in Advertising Tech, Artificial Intelligence, bias, business intelligence, Canada, deep learning, ethics, Facebook, fairness, Fundings & Exits, Georgian Partners, InfoSum, Integrate.ai, machine learning, Portag3 Ventures, Privacy, Real Ventures, SaaS, social web, TC, toronto | 3 comments

Helping businesses bring more firepower to the fight against AI-fuelled disruptors is the name of the game for Integrate.ai, a Canadian startup that’s announcing a $30M Series A today.

The round was led by Portag3 Ventures, with Georgian Partners, Real Ventures and other (unnamed) individual investors also participating. The funding will be used for a big push in the U.S. market.

Integrate.ai’s early focus has been on retail banking, retail and telcos, says founder Steve Irvine, along with some startups which have data but aren’t necessarily awash with AI expertise to throw at it. (Not least because tech giants continue to hoover up talent.)

Its SaaS platform targets consumer-centric businesses — offering to plug paying customers into a range of AI technologies and techniques to optimize their decision-making so they can respond more savvily to their customers. Aka turning “high-volume consumer funnels” into “flywheels”, if that’s a mental image that works for you.

In short, it’s selling AI pattern-spotting insights as a service via a “cloud-based AI intelligence platform” — helping businesses move from “largely rules-based decisioning” to “more machine learning-based decisioning boosted by this trusted signals exchange of data”, as he puts it.
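The rules-versus-ML distinction is easy to picture. Below is a minimal, hypothetical sketch contrasting a hand-written CRM rule with a learned conversion probability; the visitor features, training data and model are our own invention, not Integrate.ai's system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rules-based decisioning: a hand-written, deterministic CRM trigger.
def should_offer_discount_rule(visitor: dict) -> bool:
    return visitor["pages_viewed"] > 5 and visitor["returning"]

# ML-based decisioning: a model learns a conversion probability instead.
# Toy training data: [pages_viewed, returning] -> converted?
X_train = np.array([[2, 0], [8, 1], [5, 1], [1, 0], [7, 0], [3, 1]])
y_train = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

def conversion_probability(visitor: dict) -> float:
    x = [[visitor["pages_viewed"], int(visitor["returning"])]]
    return float(model.predict_proba(x)[0, 1])

visitor = {"pages_viewed": 4, "returning": True}
print(should_offer_discount_rule(visitor))  # False: fails the rigid rule
print(conversion_probability(visitor))      # a graded score instead
```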

Irvine gives the example of a large insurance aggregator the startup is working with to optimize the distribution of gift cards and incentive discounts to potential customers — with the aim of maximizing conversions.

“Obviously they’ve got a finite amount of budget for those — they need to find a way to be able to best deploy those… And the challenge that they have is they don’t have a lot of information on people as they start through this funnel — and so they have what is a classic ‘cold start’ problem in machine learning. And they have a tough time allocating those resources most effectively.”

“One of the things that we’ve been able to help them with is to, essentially, find the likelihood of those people to be able to convert earlier by being able to bring in some interesting new signal for them,” he continues. “Which allows them to not focus a lot of their revenue or a lot of those incentives on people who either have a low likelihood of conversion or are most likely to convert. And they can direct all of those resources at the people in the middle of the distribution — where that type of a nudge, that discount, might be the difference between them converting or not.”
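That allocation logic (skip the lost causes and the sure things, spend on the persuadable middle) can be sketched in a few lines. The thresholds here are assumed for illustration, not Integrate.ai's actual values:

```python
import numpy as np

def allocate_incentives(conversion_probs: np.ndarray,
                        low: float = 0.2, high: float = 0.8) -> np.ndarray:
    """Mask of prospects who should get a gift card or discount.

    Below `low`: unlikely to convert even with a nudge, so skip.
    Above `high`: likely to convert anyway, so spending is wasted.
    In between: the nudge may tip the outcome, so target these.
    """
    return (conversion_probs >= low) & (conversion_probs <= high)

probs = np.array([0.05, 0.35, 0.60, 0.92, 0.50])
print(allocate_incentives(probs))  # [False  True  True False  True]
```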

He says feedback from early customers suggests the approach has boosted profitability by around 30% on average for targeted business areas — so the pitch is that businesses see the SaaS paying for itself. (In the cited case of the insurer, he says they saw a 23% boost in performance — against what he couches as already “a pretty optimized funnel”.)

“We find pretty consistent [results] across a lot of the companies that we’re working with,” he adds. “Most of these decisions today are made by a CRM system or some other more deterministic software system that tends to over attribute people that are already going to convert. So if you can do a better job of understanding people’s behaviour earlier you can do a better job at directing those resources in a way that’s going to drive up conversion.”

The former Facebook marketing exec, who between 2014 and 2017 ran a couple of global marketing partner programs at Facebook and Instagram, left the social network at the start of last year to found the business — raising $9.6M in seed funding in two tranches, according to Crunchbase.

The eighteen-month-old, Toronto-based AI startup now touts itself as one of the fastest-growing companies in Canadian history, with a headcount of around 40 at this point and a plan to grow staff 3x to 4x over the next 12 months. Irvine is also targeting growing revenue 10x with the new funding in place — gunning to carve out a leadership position in the North American market.

One key aspect of Integrate.ai’s platform approach is that its customers aren’t only being helped to extract more and better intel from their own data holdings, via processes such as structuring the data for AI processing (though Irvine says it’s doing that too).

The idea is they also benefit from the wider network, deriving relevant insights across Integrate.ai’s pooled base of customers — in a way that does not trample over privacy in the process. At least, that’s the claim.

(It’s worth noting Integrate.ai’s network is not a huge one yet, with customers numbering in the “tens” at this point — the platform only launched in alpha around 12 months ago and remains in beta now. Named customers include the likes of Telus, Scotiabank, and Corus.)

So the idea is to offer an alternative route to boost business intelligence vs the “traditional” route of data-sharing by simply expanding databases — because, as Irvine points out, literal data pooling is “coming under fire right now — because it is not in the best interests, necessarily, of consumers; there’s some big privacy concerns; there’s a lot of security risk which we’re seeing show up”.

What exactly is Integrate.ai doing with the data then? Irvine says its Trusted Signals Exchange platform uses some “pretty advanced techniques in deep learning and other areas of machine learning to be able to transfer signals or insights that we can gain from different companies such that all the companies on our platform can benefit by delivering more personalized, relevant experiences”.

“But we don’t need to ever, kind of, connect data in a more traditional way,” he also claims. “Or pull personally identifiable information to be able to enable it. So it becomes very privacy-safe and secure for consumers which we think is really important.”
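Irvine doesn't spell out the mechanics, but the general shape of sharing learned signal rather than raw records can be sketched. Everything below (the encoder, its weights, the feature dimensions) is an illustrative stand-in, not Integrate.ai's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Company A trains an encoder on its own behavioural data. The trained
# weights stand in for a full network here; the point is what crosses
# the boundary between companies.
W_encoder = rng.normal(size=(10, 3))

def embed(behaviour_features: np.ndarray) -> np.ndarray:
    """Map raw per-user features to an anonymous 3-d signal vector."""
    return behaviour_features @ W_encoder

# Only the encoder (the learned pattern) is shared, never the rows of
# customer data behind it. Company B applies it to its own users locally:
company_b_users = rng.normal(size=(5, 10))
signals = embed(company_b_users)
print(signals.shape)  # (5, 3): usable signal, no PII exchanged
```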

He further couches the approach as “pretty unique”, adding it “wouldn’t even have been possible probably a couple of years ago”.

From Irvine’s description, the approach sounds similar to the data-linking (via mathematical modelling) route being pursued by another startup, UK-based InfoSum — which has built a platform that extracts insights from linked customer databases while holding the actual data in separate silos. (And InfoSum, which was founded in 2016, also has a founder with a behind-the-scenes view of the inner workings of the social web — in the form of Datasift’s Nic Halstead.)

Facebook’s own custom audiences product, which lets advertisers upload and link their customer databases with the social network’s data holdings for marketing purposes, is the likely inspiration behind all of these systems.

Irvine says he spotted the opportunity to build this line of business having been privy to a market overview in his role at Facebook, meeting with scores of companies in his marketing partner role and getting to hear high-level concerns about competing with tech giants. He says the Facebook job also afforded him an overview of startup innovation — and there he spied a gap for Integrate.ai to fill.

“My team was in 22 offices around the world, and all the major tech hubs, and so we got a chance to see any of the interesting startups that were getting traction pretty quickly,” he tells TechCrunch. “That allowed us to see the gaps that existed in the market. And the biggest gap that I saw… was these big consumer enterprises needed a way to use the power of AI and needed access to third party data signals or insights to enable them to transition to this more customer-centric operating model to have any hope of competing with the large digital disruptors like Amazon.

“That was kind of the push to get me out of Facebook, back from California to Toronto, Canada, to start this company.”

Again on the privacy front, Irvine is a bit coy about going into exact details about the approach. But he is unequivocal and emphatic about how ad tech players are stepping over the line — having seen into that pandora’s box for years — so his rationale for wanting to do things differently at least looks clear.

“A lot of the techniques that we’re using are in the field of deep learning and transfer learning,” he says. “If you think about the ultimate consumer of this data-sharing, that is insight sharing, it is at the end these AI systems or models. Meaning that it doesn’t need to be legible to people as an output — all we’re really trying to do is increase the map; make a better probabilistic decision in these circumstances where we might have little data or not the right data that we need to be able to make the right decision. So we’re applying some of the newer techniques in those areas to be able to essentially kind of abstract away from some of the more sensitive areas, create representations of people and patterns that we see between businesses and individuals, and then use that as a way to deliver more personalized predictions — without ever having to know the individual’s personally identifiable information.”

“We do do some work with differential privacy,” he adds when pressed further on the specific techniques being used. “There’s some other areas that are just a little bit more sensitive in terms of the work that we’re doing — but a lot of work around representation learning and transfer learning.”
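For readers unfamiliar with the differential privacy technique he name-checks, the core trick is adding calibrated noise to an aggregate so that no single individual's data can be inferred from the output. A generic sketch of the textbook Laplace mechanism follows; the data and parameters are illustrative, and this is not Integrate.ai's implementation:

```python
import numpy as np

def laplace_private_mean(values: np.ndarray, epsilon: float,
                         lower: float, upper: float) -> float:
    """Release the mean of `values` with epsilon-differential privacy."""
    clipped = np.clip(values, lower, upper)
    # Max change one person can cause to the mean, given the clipping bounds.
    sensitivity = (upper - lower) / len(clipped)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

spend = np.array([12.0, 80.0, 45.0, 200.0, 33.0])  # toy per-user values
print(laplace_private_mean(spend, epsilon=1.0, lower=0.0, upper=250.0))
```

Smaller epsilon means more noise and stronger privacy; the aggregate stays useful while any one customer's row is deniable.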

Integrate.ai has published a whitepaper — for a framework to “operationalize ethics in machine learning systems” — and Irvine says it’s been called in to meet and “share perspectives” with regulators based on that.

“I think we’re very GDPR-friendly based on the way that we have thought through and constructed the platform,” he also says when asked whether the approach would be compliant with the European Union’s tough new privacy framework (which also places some restrictions on entirely automated decisions when they could have a significant impact on individuals).

“I think you’ll see GDPR and other regulations like that push more towards these type of privacy preserving platforms,” he adds. “And hopefully away from a lot of the really creepy, weird stuff that is happening out there with consumer data that I think we all hope gets eradicated.”

For the record, Irvine denies any suggestion that he was thinking of his old employer when he referred to “creepy, weird stuff” done with people’s data — saying: “No, no, no!”

“What I did observe when I was there in ad tech in general, I think if you look at that landscape, I think there are many, many… worse examples of what is happening out there with data than I think the ones that we’re seeing covered in the press. And I think as the light shines on more of that ecosystem of players, I think we will start to see that the ways they’ve thought about data, about collection, permissioning, usage, I think will change drastically,” he adds.

“And the technology is there to be able to do it in a much more effective way without having to compromise results in too big a way. And I really hope that that sea change has already started — and I hope that it continues at a much more rapid pace than we’ve seen.”

But while privacy concerns might be reduced by using an alternative to traditional data-pooling, depending on the exact techniques involved, additional ethical considerations clearly come into view when companies seek to supercharge their profits by automating decision-making in sensitive and impactful areas such as discounts (meaning some users stand to gain more than others).

The point is that an AI system expert at spotting the lowest-hanging fruit (in conversion terms) could start selectively distributing discounts to only a narrow sub-section of users — meaning other people might never be offered discounts at all.

In short, it risks the platform creating unfair and/or biased outcomes.

Integrate.ai has recognized the ethical pitfalls, and appears to be trying to get ahead of them — hence its aforementioned ‘Responsible AI in Consumer Enterprise’ whitepaper.

Irvine also says that raising awareness around issues of bias and “ethical AI” — and promoting “more responsible use and implementation” of its platform — is another priority over the next twelve months.

“The biggest concern is the unethical treatment of people in a lot of common, day-to-day decisions that companies are going to be making,” he says of problems attached to AI. “And they’re going to do it without understanding, and probably without bad intent, but the reality is the results will be the same — which is perpetuating a lot of biases and stereotypes of the past. Which would be really unfortunate.

“So hopefully we can continue to carve out a name, on that front, and shift the industry more to practices that we think are consistent with the world that we want to live in vs the one we might get stuck in.”

The whitepaper was produced by a dedicated internal team, which he says focuses on AI ethics and fairness issues, and is headed up by VP of product & strategy, Kathryn Hume.

“We’re doing a lot of research now with the Vector Institute for AI… on fairness in our AI models, because what we’ve seen so far is that — if left unattended, if all we did was run these models and not adjust for some of the ethical considerations — we would just perpetuate biases that we’ve seen in the historical data,” he adds.

“We would pick up patterns that are more commonly associated with maybe reinforcing particular stereotypes… so we’re putting a really dedicated effort — probably abnormally large, given our size and stage — towards leading in this space, and making sure that that’s not the outcome that gets delivered through effective use of a platform like ours. But actually, hopefully, the total opposite: You have a better understanding of where those biases might creep in and they could be adjusted for in the models.”

Combating unfairness in this type of AI tool would mean a company having to optimize conversion performance a bit less than it otherwise could.

Though Irvine suggests that’s likely just in the short term. Over the longer term he argues you’re laying the foundations for greater growth — because you’re building a more inclusive business — saying: “We have this conversation a lot. I think it’s good for business, it’s just the time horizon that you might think about.”

“We’ve got this window of time right now, that I think is a really precious window, where people are moving over from more deterministic software systems to these more probabilistic, AI-first platforms… They just operate much more effectively, and they learn much more effectively, so there will be a boost in performance no matter what. If we can get them moved over right off the bat onto a platform like ours that has more of an ethical safeguard, then they won’t notice a drop off in performance — because it’ll actually be better performance. Even if it’s not optimized fully for short term profitability,” he adds.

“And we think, over the long term it’s just better business if you’re a socially conscious, ethical company. We think, over time, especially this new generation of consumers, they start to look out for those things more… So we really hope that we’re on the right side of this.”

He also suggests that the wider visibility afforded by having AI doing the probabilistic pattern spotting (vs just using a set of rules) could even help companies identify unfairnesses they don’t even realize might be holding their businesses back.

“We talk a lot about this concept of mutual lifetime value — which is how do we start to pull in the signals that show that people are getting value in being treated well, and can we use those signals as part of the optimization. And maybe you don’t have all the signal you need on that front, and that’s where being able to access a broader pool can actually start to highlight those biases more.”


Source: The Tech Crunch

Read More

New Knowledge just raised $11 million more to flag and fight social media disinformation meant to bring down companies

Posted by on Aug 28, 2018 in Artificial Intelligence, ggv capital, New Knowledge, Recent Funding, SaaS, Startups, TC | 0 comments

Back in January, we told you about a young, Austin, Tex.-based startup that fights online disinformation for corporate customers. Turns out we weren’t alone in finding it interesting. The now four-year-old, 40-person outfit, New Knowledge, just sealed up $11 million in new funding led by the cross-border venture firm GGV Capital, with participation from Lux Capital. GGV had also participated in the company’s $1.9 million seed round.

We talked yesterday with co-founder and CEO Jonathon Morgan and the company’s director of research, Renee DiResta, to learn more about its work, which appears to be going well. (They say revenue has grown 1,000 percent over last year.) Our conversation, edited for length, follows.

TC: A lot of people associate coordinated manipulation by bad actors online with trying to disrupt elections here in the U.S. or with pro-government agendas elsewhere, but you’re working with companies that are also battling online propaganda. Who are some of them?

JM: Election interference is just the tip of the iceberg in terms of social media manipulation. Our customers are a little sensitive about being identified, but they are Fortune 100 companies in the entertainment industry, as well as consumer brands. We also have national security customers, though most of our business comes from the private sector.

TC: Renee, just a few weeks ago, you testified before the Senate Intelligence Committee about how social media platforms have enabled foreign-influence operations against the United States. What was that like?

RD: It was a great opportunity to educate the public on what happens and to speak directly to the senators about the need for government to be more proactive and to establish a deterrent strategy because [these disinformation campaigns] aren’t impacting just our elections but our society and American industry.

TC: How do companies typically get caught up in these practices?

JM: It’s pretty typical for consumer-facing brands, because they are so high-profile, to get involved in quasi-political conversations, whether or not they like it. Communities that know how to game the system will come after them over a pro-immigration stance, for example. They mobilize and use the same black-market social media content providers, the same tools and tactics that are used by Russia and Iran and other bad actors.

TC: In other words, this is about ideology, not financial gain.

JM: Where we see this more for financial gain is when it involves state intelligence agencies trying to undermine companies where they have nationalized an industry that competes with U.S. institutions, like oil and gas and agriculture companies. You can see this in the promotion of anti-GMO narratives, for example. Agricultural tech in the U.S. is a big business, and on the fringes there’s some debate about whether GMOs are safe to eat, even though the scientific community is clear that they’re completely safe.

Meanwhile, there are documented examples of groups aligned with Russian intelligence using purchased social media to circulate conspiracy theories and manipulate the public conversation about GMOs. They find a grain of truth in a scientific article, then misrepresent the findings through quasi-legitimate outlets, Facebook pages and Twitter accounts that are in turn amplified by social media automation.

TC: So you’re selling software-as-a-service that does what exactly?

JM: We have a SaaS product and a team of analysts who come out of the intelligence community and who help customers understand threats to their brand. It’s an AI-driven system that detects subtle social signs of manipulation across accounts. We then help the companies understand who is targeting them, why, and what they can do about it.

TC: Which is what?

JM: First, they can’t be blindsided. Many can’t tell the difference between real and manufactured public outcry, so they don’t even know about it when it’s happening. But there’s a pretty predictable set of tactics that are used to create false public perception. They plant a seed with accounts they control directly that can look quasi-legitimate. Then they amplify it via paid automation, and they target specific individuals who may have an interest in what they have to say. The thinking is that if they can manipulate these microinfluencers, they’ll amplify the message by sharing it with their followers. By then, you can’t put the cat back in the bag.  You need to identify [these campaigns] when they’ve lit the match, but haven’t yet started a fire.
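That "lit match, not yet a fire" detection problem lends itself to simple illustrative signals: a burst of near-simultaneous shares from mostly new accounts, for instance, hints at paid automation rather than organic spread. The features and thresholds in this sketch are our own invention, not New Knowledge's actual detectors:

```python
import numpy as np

def looks_coordinated(share_timestamps: np.ndarray,
                      account_ages_days: np.ndarray) -> bool:
    """Flag a burst of shares that smells like paid amplification."""
    gaps = np.diff(np.sort(share_timestamps))
    burst = np.median(gaps) < 2.0                  # shares seconds apart
    young = np.mean(account_ages_days < 30) > 0.6  # mostly brand-new accounts
    return bool(burst and young)

ts = np.array([0.0, 0.5, 1.1, 1.6, 2.0, 2.4])  # seconds since first share
ages = np.array([3, 12, 7, 400, 9, 15])        # account ages in days
print(looks_coordinated(ts, ages))  # True -> surface to human analysts
```

A production system would combine many such signals across accounts, but the goal is the same: catch the seed-and-amplify pattern before it reaches real audiences.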

At the early stage, we can provide information to social media platforms to determine if what’s going on is acceptable within their policies. Longer term, we’re trying to find consensus between governments and also social media platforms themselves over what is and what isn’t acceptable — what’s aggressive conversation on these platforms and what’s out of bounds.

TC: How can you work with them when they can’t even decide on their own policies?

JM: First, different platforms are used for different reasons. You see peer-to-peer disinformation, where a small group of accounts drives a malicious narrative on Facebook, which can be problematic at the very local level. Twitter is the platform where media gets its pulse on what’s happening, so attacks launched on Twitter are much more likely to be made into mainstream opinion. There are also a lot of disinformation campaigns on Reddit, but those conversations are less likely to be elevated into a topic on CNN, even while they can shape the opinions of large numbers of avid users. Then there are the off-brand platforms like 4chan, where a lot of these campaigns are born. They are all susceptible in different ways.

The platforms have been very receptive. They take these campaigns much more seriously than when they first began looking at election integrity. But platforms are increasingly evolving from more open to more closed spaces, whether it’s WhatsApp groups or private Discord channels or private Facebook channels, and that’s making it harder for the platforms to observe. It’s also making it harder for outsiders who are interested in how these campaigns evolve.


Source: The Tech Crunch

Read More