
The blog of DataDiggers


Index Ventures, Stripe back bookkeeping service Pilot with $40M

Posted on Apr 18, 2019 in computing, Dropbox, Finance, funding, Index Ventures, jessica mckellar, ksplice, linux, MIT, oracle, San Francisco, Software, Startup company, Startups, stripe, Waseem Daher, zulip

Five years after Dropbox acquired their startup Zulip, Waseem Daher, Jeff Arnold and Jessica McKellar have gained traction for their third business together: Pilot.

Pilot helps startups and small businesses manage their back office. Chief executive officer Daher admits it may seem a little boring, but the market opportunity is undeniably huge. To tackle it, Pilot is today announcing a $40 million Series B led by Index Ventures with participation from Stripe, the online payments company.

The round values Pilot, which has raised about $60 million to date, at $355 million.

“It’s a massive industry that has sucked in the past,” Daher told TechCrunch. “People want a really high-quality solution to the bookkeeping problem. The market really wants this to exist and we’ve assembled a world-class team that’s capable of knocking this out of the park.”

San Francisco-based Pilot launched in 2017, more than a decade after the three founders met in MIT’s student computing group. It’s not surprising they’ve garnered attention from venture capitalists, given that their first two companies resulted in notable acquisitions.

“Pilot has taken on a massively overlooked but strategic segment — bookkeeping,” Index’s Mark Goldberg told TechCrunch via email. “While dry on the surface, the opportunity is enormous given that an estimated $60 billion is spent on bookkeeping and accounting in the U.S. alone. It’s a service industry that can finally be automated with technology and this is the perfect team to take this on — third-time founders with a perfect combo of financial acumen and engineering.”

The trio of founders’ first project, Linux upgrade software called Ksplice, sold to Oracle in 2011. Their next business, Zulip, exited to Dropbox before it even had the chance to publicly launch.

It was actually while building Ksplice that Daher and team realized the dire need for tech-enabled bookkeeping solutions.

“We built something internally like this as a byproduct of just running [Ksplice],” Daher explained. “When Oracle was acquiring our company, we met with their finance people and we described this system to them and they were blown away.”

It took a few years for the team to refocus their efforts on streamlining back-office processes for startups, opting to build business chat software in Zulip first.

Pilot’s software integrates with other financial services products to bring the bookkeeping process into the 21st century. Its platform, for example, works seamlessly on top of QuickBooks so customers aren’t wasting precious time updating and managing the accounting application.

“It’s better than the slow, painful process of doing it yourself and it’s better than hiring a third-party bookkeeper,” Daher said. “If you care at all about having the work be high-quality, you have to have software do it. People aren’t good at these mechanical, repetitive, formula-driven tasks.”

Currently, Pilot handles bookkeeping for more than $100 million per month in financial transactions but hopes to use the infusion of venture funding to accelerate customer adoption. The company also plans to launch a tax prep offering that they say will make the tax prep experience “easy and seamless.”

“It’s our first foray into Pilot’s larger mission, which is taking care of running your company’s entire back office so you can focus on your business,” Daher said.

As for whether the team will sell to another big acquirer, it’s unlikely.

“The opportunity for Pilot is so large and so substantive, I think it would be a mistake for this to be anything other than a large and enduring public company,” Daher said. “This is the company that we’re going to do this with.”


Source: The Tech Crunch


MIT’s deflated balloon robot hand can pick up objects 100x its own weight

Posted on Mar 14, 2019 in CSAIL, harvard, MIT, Robotics, soft robot

Soft, biologically inspired robots have become one of the field’s most exciting offshoots, with machines that are capable of squeezing between obstacles and conforming to the world around them. A joint project between MIT CSAIL and Harvard’s Wyss Institute converts those learnings into a simple, soft robotic gripper capable of handling delicate objects and picking up things up to 100x its own weight.

The gripper itself is made of an origami-inspired skeletal structure, covered in either fabric or a deflated balloon. It’s a principle the team recently employed on another project designed to create low-cost artificial muscles. A connector attaches the gripper to the arm and also sports a vacuum tube that sucks air out from the gripper, collapsing it around an object.

Like Soft Robotics’ commercial gripper, the malleable nature of the device means it can grab hold of a wide range of different objects with less need for a complex vision system. It also means it can grip delicate items without damaging them in the process.

“Previous approaches to the packing problem could only handle very limited classes of objects — objects that are very light or objects that conform to shapes such as boxes and cylinders, but with the Magic Ball gripper system we’ve shown that we can do pick-and-place tasks for a large variety of items ranging from wine bottles to broccoli, grapes and eggs,” MIT professor Daniela Rus says in a release tied to the news. “In other words, objects that are heavy and objects that are light. Objects that are delicate, or sturdy, or that have regular or free form shapes.”



Harvard-MIT initiative grants $750K to projects looking to keep tech accountable

Posted on Mar 12, 2019 in Artificial Intelligence, funding, Government, harvard, Harvard University, Media, media lab, MIT, mit media lab, Philanthropy, Social, TC

Artificial intelligence, or what passes for it, can be found in practically every major tech company and, increasingly, in government programs. A joint Harvard-MIT program just unloaded $750,000 on projects looking to keep such AI developments well understood and well reported.

The Ethics and Governance in AI Initiative is a combination research program and grant fund operated by MIT’s Media Lab and Harvard’s Berkman-Klein Center. The small projects selected by the initiative are, generally speaking, aimed at using technology to keep people informed, or informing people about technology.

AI is an enabler of both good and ill in the world of news and information gathering, as the initiative’s director, Tim Hwang, said in a news release:

“On one hand, the technology offers a tremendous opportunity to improve the way we work — including helping journalists find key information buried in mountains of public records. Yet we are also seeing a range of negative consequences as AI becomes intertwined with the spread of misinformation and disinformation online.”

These grants are not the first the initiative has given out, but they are the first in response to an open call for ideas, Hwang noted.

The largest sum of the bunch, a $150,000 grant, went to MuckRock Foundation’s project Sidekick, which uses machine learning tools to help journalists scour thousands of pages of documents for interesting data. This is critical in a day and age when government and corporate records are so voluminous (for example, millions of emails leaked or revealed via FOIA) that it is basically impossible for a reporter or even team to analyze them without help.

Along the same lines is Legal Robot, which was awarded $100,000 for its plan to mass-request government contracts, then extract and organize the information within. This makes a lot of sense: People I’ve talked to in this sector have told me that the problem isn’t a lack of data but a surfeit of it, and poorly kept at that. Cleaning up messy data is going to be one of the first tasks any investigator or auditor of government systems will want to do.

Tattle is a project aiming to combat disinformation and false news spreading on WhatsApp, which, as we’ve seen, has been a major vector for it. It plans to use its $100,000 to establish channels for sourcing data from users, because, of course, much of WhatsApp is encrypted. Connecting this data with existing fact-checking efforts could help understand and mitigate harmful information going viral.

The Rochester Institute of Technology will be using its grant (also $100,000) to look into detecting manipulated video, both designing its own techniques and evaluating existing ones. Close inspection of the media will render a confidence score that can be displayed via a browser extension.

Other grants are going to AI-focused reporting work by The Seattle Times and by newsrooms in Latin America, and to workshops training local media in reporting on AI and how it affects their communities.

To be clear, the initiative isn’t investing in these projects — just funding them with a handful of stipulations, Hwang explained to TechCrunch over email.

“Generally, our approach is to give grantees the freedom to experiment and run with the support that we give them,” he wrote. “We do not take any ownership stake but the products of these grants are released under open licenses to ensure the widest possible distribution to the public.”

He characterized the initiative’s grants as a way to pick up the slack that larger companies are leaving behind as they focus on consumer-first applications like virtual assistants.

“It’s naive to believe that the big corporate leaders in AI will ensure that these technologies are being leveraged in the public interest,” wrote Hwang. “Philanthropic funding has an important role to play in filling in the gaps and supporting initiatives that envision the possibilities for AI outside the for-profit context.”

You can read more about the initiative and its grantees here.



MIT’s insulin pill could replace injections for people with diabetes

Posted on Feb 7, 2019 in Health, MIT, Science

Insulin pills have long been a kind of Holy Grail for people living with diabetes. A research team at MIT believes it may have taken an important step toward that dream with a new blueberry-sized capsule made of compressed insulin.

Once ingested, water dissolves a disk of sugar, releasing a compressed spring that drives a tiny needle made almost entirely of freeze-dried insulin into the stomach wall. The patient can’t feel the injection, owing to the stomach’s lack of pain receptors, and once it has been delivered the needle breaks down in the digestive tract.

The pill is able to orient itself once swallowed, in order to make sure it injects in the right spot. That bit was apparently inspired by tortoise shells.

According to MIT, “The researchers drew their inspiration for the self-orientation feature from a tortoise known as the leopard tortoise. This tortoise, which is found in Africa, has a shell with a high, steep dome, allowing it to right itself if it rolls onto its back. The researchers used computer modeling to come up with a variant of this shape for their capsule, which allows it to reorient itself even in the dynamic environment of the stomach.”

So far, the team has been testing the pill successfully in pigs, delivering up to 300 micrograms of insulin in one go. No word on how long it might take to arrive in pharmacies.



Fabula AI is using social spread to spot ‘fake news’

Posted on Feb 6, 2019 in Amazon, api, Artificial Intelligence, deep learning, Emerging-Technologies, Europe, European Research Council, Facebook, fake news, Imperial College London, London, machine learning, Mark Zuckerberg, Media, MIT, Myanmar, Social, Social Media, social media platforms, social media regulation, social network, social networks, Startups, TC, United Kingdom

UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip-side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning,” where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach, which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned-upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation. Which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogenous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”
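The analogy Bronstein draws can be made concrete with a standard toy model of network spread, the independent cascade, in which each sharing user passes content to each follower with some probability. The sketch below is purely illustrative (a made-up six-node graph, not Fabula’s model or data):

```python
import random

random.seed(0)  # seeded so the toy run is reproducible

# Adjacency list for a tiny, made-up social graph:
# node -> list of followers the node can spread to.
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}

def cascade(graph, seed_node, p):
    """Independent-cascade spread: each newly 'infected' (sharing)
    user passes the story to each follower with probability p."""
    infected, frontier = {seed_node}, [seed_node]
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in infected and random.random() < p:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return infected

reached = cascade(graph, 0, p=0.9)
```

With the seeded run above the story saturates the whole toy network in a few hops; a low `p` leaves most nodes unreached. Differences in the shape and reach of such cascades are the kind of propagation signal a pattern-based classifier can learn from.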

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far been tested internally on Twitter data sub-sets. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of this year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is that it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
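For readers unfamiliar with the metric: ROC AUC scores a classifier by how well its confidence scores rank positives above negatives, and is equivalent to the fraction of (fake, genuine) pairs the model orders correctly. A minimal pure-Python sketch on toy data (not Fabula’s) shows the idea:

```python
def roc_auc(labels, scores):
    """ROC AUC via its pairwise definition: the fraction of
    (positive, negative) pairs in which the positive item gets
    the higher score (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = fake, 0 = genuine; scores are fake-confidence.
auc = roc_auc([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.4, 0.6, 0.2, 0.1])
# one of the nine fake/genuine pairs is mis-ranked, so auc = 8/9
```

A model that ranked every fake above every genuine story would score 1.0; random ordering hovers around 0.5, which is why the 93 percent figure is meaningful.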

The dataset the team used to train their model is a subset of Twitter’s network — comprising around 250,000 users and around 2.5 million “edges” (aka social connections).

For their training dataset Fabula relied on true/fake labels attached to news stories by third-party fact-checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of “many months,” according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks, as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps digesting (ingesting?) new stories and social media content.
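The ageing test he describes amounts to a temporal holdout: train only on stories older than a cutoff, then evaluate on stories newer than it. A hypothetical sketch (the record layout is invented for illustration):

```python
def temporal_split(stories, cutoff):
    """Split (timestamp, story_id, label) records at a time cutoff,
    so the model is trained only on data older than what it is
    evaluated on -- the ageing test described in the text."""
    train = [s for s in stories if s[0] < cutoff]
    test = [s for s in stories if s[0] >= cutoff]
    return train, test

# Toy records: (timestamp, story_id, label) with 1 = fake, 0 = true.
stories = [(1, "a", 0), (2, "b", 1), (3, "c", 0), (4, "d", 1)]
train, test = temporal_split(stories, cutoff=3)
# train holds the two earliest stories; test the two most recent
```

Unlike a random split, this ordering-aware split surfaces drift: if accuracy on the newer slice decays slowly, the model “ages well,” as Bronstein puts it.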

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much about the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model, while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users, uploading what could be trillions of pieces of content daily — even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.
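That arithmetic is easy to check. A back-of-the-envelope sketch, using the article’s illustrative figure of a trillion daily items rather than any real measurement:

```python
# Even a high catch rate leaves an enormous absolute number of misses
# at platform scale: the ~7 percent a ~93-percent-accurate filter lets
# through, applied to a trillion daily items, is tens of billions.
daily_items = 1_000_000_000_000   # illustrative volume, per the text
detection_rate = 0.93
missed = daily_items * (1 - detection_rate)
# missed is roughly 70 billion items per day
```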

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather, he suggests it could be used in conjunction with other approaches, such as content analysis, and thus function as another string to the bow of a wider ‘BS detector’.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used, he says it could do away with the need for independent third party fact-checking organizations altogether because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); or to power a content credibility ranking system that can down-weight dubious stories or even block them entirely; or for intermediate content screening to flag potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact that it did not have enough reviewers who were able to understand the (many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic propagation and user focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human brains which lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will obviously be next steps but we hypothesize that it’s less language dependent. It might be somehow geographically varied. But these will already be second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and propensity to spread disinformation? As there can be a link between poverty and education… What then? Aren’t your savvy algorithmic content downweights risking exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding and about another $500,000 in total of European Research Council grants plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competition awards.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t discuss any potential discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team have mostly used U.S. political news for training their initial classifier. So some cultural variations in how people spread and react to nonsense online at least remains a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests they do exist — albeit just not encasing every Internet user.

Bronstein says the next step for the startup is to scale its prototype to be able to deal with multiple requests, so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access, maybe with some commercial partners, to test the API but eventually we would like to make it usable by multiple people from different businesses,” says Bronstein. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”



How students are founding, funding and joining startups

Posted on Feb 6, 2019 in Accel, Accel Scholars, Alumni Ventures Group, Amanda Bradford, Artificial Intelligence, Bill Gates, boston, coinbase, Column, CRM, CrunchBase, distributed systems, Dorm Room Fund, Drew Houston, Dropbox, editor-in-chief, Energy, entrepreneurship, Facebook, Finance, FiscalNote, Forward, General Catalyst, Graduate Fund, greylock, harvard, Jeremy Liew, Kleiner Perkins, lightspeed, Mark Zuckerberg, MIT, Pear Ventures, peter boyce, Pinterest, Private Equity, Series A, stanford, Start-Up Chile, Startup company, Startups, TC, TechStars, True Ventures, Ubiquity6, uc-berkeley, United States, upenn, Venture Capital, venture capital Firms, Warby Parker, Y Combinator

There has never been a better time to start, join or fund a startup as a student. 

Young founders who want to start companies while still in school have a growing number of resources that exist just for them. Students who want to learn how to build companies can apply to a growing number of fast-track programs that let them gain valuable early-stage operating experience. The energy around student entrepreneurship today is incredible. I’ve been immersed in this community as an investor and adviser for some time now, and to say the least, I’m continually blown away by what the next generation of innovators is dreaming up (from Analytical Space’s global data relay service for satellites to Brooklinen’s reinvention of the luxury bed).

Bill Gates in 1973

First, let’s look at student founders and why they’re important. Student entrepreneurs have long been an important foundation of the startup ecosystem. Many students wrestle with how best to learn while in school — some students learn best through lectures, while more entrepreneurial students like author Julian Docks find it best to leave the classroom altogether and build a business instead.

Indeed, some of our most iconic founders are Microsoft’s Bill Gates and Facebook’s Mark Zuckerberg, both student entrepreneurs who launched their startups at Harvard and then dropped out to build their companies into major tech giants. A sample of the current generation of marquee companies founded on college campuses includes Snap at Stanford ($29B valuation at IPO), Warby Parker at Wharton (~$2B valuation), Rent The Runway at HBS (~$1B valuation), and Brex at Stanford (~$1B valuation).

Some of today’s most celebrated tech leaders built their first ventures while in school — even if some student startups fail, the critical first-time founder experience is an invaluable education in how to build great companies. Perhaps the best example of this that I could find is Drew Houston at Dropbox (~$9B valuation at IPO), who previously founded an edtech startup at MIT that, in his words, provided a “great introduction to the wild world of starting companies.”

Student founders are everywhere, but the highest concentration of venture-backed student founders can be found at just 5 universities. Based on venture fund portfolio data from the last six years, Harvard, Stanford, MIT, UPenn, and UC Berkeley have produced the highest number of student-founded companies that went on to raise $1 million or more in seed capital. Some prospective students will even enroll in a university specifically for its reputation of churning out great entrepreneurs. This is not to say that great companies are not being built out of other universities, nor does it mean students can’t find resources outside a select number of schools. As you can see later in this essay, there are a number of new ways students all around the country can tap into the startup ecosystem. For further reading, PitchBook produces an excellent report each year that tracks where all entrepreneurs earned their undergraduate degrees.

Student founders have a number of new media resources to turn to. New email newsletters focused on student entrepreneurship like Justine and Olivia Moore’s Accelerated and Kyle Robertson’s StartU offer new channels for young founders to reach large audiences. Justine and Olivia, the minds behind Accelerated, have a lot of street cred — they launched Stanford’s on-campus incubator Cardinal Ventures before landing as investors at CRV.

StartU goes above and beyond to be a resource to the founders it profiles, helping connect them with investors (it’s active at 12 universities), and runs a top-notch podcast hosted by Editor-in-Chief Johnny Hammond. My bet is that traditional media will point a larger spotlight at student entrepreneurship going forward.

New pools of capital are also available specifically for student founders. Four categories deserve special attention:

  • University-affiliated accelerator programs
  • University-affiliated angel networks
  • Professional venture funds investing at specific universities
  • Professional venture funds investing through student scouts

While it is difficult to estimate exactly how much capital has been deployed by each, there is no denying that there has been an explosion in the number of programs that address the pre-seed phase. A sample of the programs available at the top 5 universities listed above is in the graphic below — listing every resource at every university would be difficult, as there are so many.

One alumni-centric fund to highlight is the Alumni Ventures Group, which pools LP capital from alumni at specific universities, then launches individual venture funds that invest in founders connected to those universities (e.g. students, alumni, professors, etc.). Through this model, they’ve deployed more than $200M per year! Another highlight has been student scout programs — which vary in the degree of autonomy and capital invested — but essentially empower students to identify and fund high-potential student-founded companies for their parent venture funds. On campuses with a large concentration of student founders, it is not uncommon to find student scouts from as many as 12 different venture funds actively sourcing deals (as is made clear from David Tao’s analysis at UC Berkeley).

Investment Team at Rough Draft Ventures

In my opinion, the two institutions that have the most expansive line of sight into the student entrepreneurship landscape are First Round’s Dorm Room Fund and General Catalyst’s Rough Draft Ventures. Since 2012, these two funds have operated a nationwide network of student scouts that have invested $20K–$25K checks into companies founded by student entrepreneurs at 40+ universities. “Scout” is a loose term and doesn’t do it justice — the student investors at these two funds are almost entirely autonomous, have built their own platform services to support portfolio companies, and have launched programs to incubate companies built by female founders and founders of color. Another student-run fund worth noting that has reach beyond a single region is Contrary Capital, which raised $2.2M last year. They do a particularly great job of reaching founders at a diverse set of schools — their network of student scouts is active at 45 universities and has spoken with 3,000 founders per year since getting started. Contrary is also testing out what they describe as a “YC for university-based founders”. In their first cohort, 100% of their companies raised a pre-seed round after Contrary’s demo day. Another even more recently launched organization is The MBA Fund, which caters to founders from the business schools at Harvard, Wharton, and Stanford. While super exciting, Contrary and The MBA Fund launched only recently and manage portfolios that are not yet large enough for analysis.

Over the last few months, I’ve collected and cross-referenced publicly available data from both Dorm Room Fund and Rough Draft Ventures to assess the state of student entrepreneurship in the United States. Companies were pulled from each fund’s portfolio page, then checked against Crunchbase for amount raised, accelerator participation, and other metrics. If you’d like to sift through the data yourself, feel free to ping me — my email can be found at the end of this article. To be clear, this does not represent the full scope of investment activity at either fund — many companies in the portfolios of both funds remain confidential and unlisted for good reasons (e.g. startups working in stealth). In addition, data for early stage companies is notoriously variable in quality, even with Crunchbase. You should read these insights as directional only, given the debatable confidence interval. Still, the data is interesting and gives good indicators for the health of student entrepreneurship today.
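The cross-referencing step described above can be sketched roughly as follows. All company names and figures in this sketch are invented placeholders, not real portfolio data:

```python
# Hypothetical sketch of checking a fund's portfolio list against a
# Crunchbase-style lookup table. All names and figures are placeholders.
portfolio = ["AlphaCo", "BetaCo", "GammaCo"]  # scraped from a portfolio page

crunchbase = {  # fields looked up for each company
    "AlphaCo": {"raised_usd": 5_000_000, "accelerator": "Y Combinator"},
    "BetaCo": {"raised_usd": 1_200_000, "accelerator": None},
}

# Attach the Crunchbase record to each portfolio company, if one exists.
enriched = {name: crunchbase.get(name) for name in portfolio}

# Companies with no Crunchbase record get flagged for manual review,
# since early-stage data is often incomplete.
missing = [name for name, row in enriched.items() if row is None]
```

The manual-review step matters because, as noted above, early-stage data is patchy even in Crunchbase.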

Dorm Room Fund and Rough Draft Ventures have invested in 230+ student-founded companies that have gone on to raise nearly $1 billion in follow-on capital. These funds have invested in a diverse range of companies, from govtech (e.g. mark43, raised $77M+ and FiscalNote, raised $50M+) to space tech (e.g. Capella Space, raised ~$34M). Several portfolio companies have had successful exits, such as crypto startup Distributed Systems (acquired by Coinbase) and social networking startup tbh (acquired by Facebook). While it is too early to evaluate the success of these funds on a returns basis (both were launched just 6 years ago), we can get a sense of success by evaluating the rates by which portfolio companies raise additional capital. Taken together, 34% of DRF and RDV companies in our data set have raised $1 million or more in seed capital. For a rough comparison, CB Insights cites that 40% of YC companies and 48% of Techstars companies successfully raise follow-on capital (defined as anything above $750K). Certainly within the ballpark!

Source: Crunchbase

Dorm Room Fund and Rough Draft Ventures companies in our data set have an 11–12% rate of survivorship to Series A. As a benchmark, a former partner at Y Combinator shared that 20% of their accelerator companies raise Series A capital (YC declined to share the official figure, but it’s likely a stat that is increasing given their new Series A support programs. For further reading, check out YC’s reflection on what they’ve learned about helping their companies raise Series A funding). In any case, DRF and RDV’s numbers should be taken with a grain of salt, as the average age of their portfolio companies is very low and raising Series A rounds generally takes time. Ultimately, it is clear that DRF and RDV are active in the earlier (and riskier) phases of the startup journey.

Dorm Room Fund and Rough Draft Ventures send 18–25% of their portfolio companies to Y Combinator or Techstars. Given YC’s 1.5% acceptance rate as reported in Fortune, this is quite significant! Internally, these two funds offer founders an opportunity to participate in mock interviews with YC and Techstars alumni, as well as tap into their communities for peer support (e.g. advice on pitch decks and application content). As a result, Dorm Room Fund and Rough Draft Ventures regularly send cohorts of founders to these prestigious accelerator programs. Based on our data set, 17–20% of DRF and RDV companies that attend one of these accelerators end up raising Series A venture financing.

Source: Crunchbase

Dorm Room Fund and Rough Draft Ventures don’t invest in the same companies. When we take a deeper look at one specific ecosystem where these two funds have been equally active over the last several years — Boston — we actually see that the degree of investment overlap for companies that have raised $1M+ seed rounds sits at 26%. This suggests that these funds are either a) seeing different dealflow or b) making very different investment decisions.
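As a rough illustration of how such an overlap figure can be computed — the portfolios and the intersection-over-union definition below are assumptions for the sketch, not the funds’ actual data or methodology:

```python
# Hypothetical Boston portfolios; real company names are not used.
drf_boston = {"AlphaCo", "BetaCo", "GammaCo", "DeltaCo"}
rdv_boston = {"GammaCo", "EpsilonCo", "ZetaCo"}

shared = drf_boston & rdv_boston  # companies both funds backed

# One way to define overlap: shared companies as a fraction of every
# company either fund backed (intersection over union).
overlap_rate = len(shared) / len(drf_boston | rdv_boston)
```

A low overlap rate under this definition supports the dealflow-vs-decision-making question raised above.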

Source: Crunchbase

Dorm Room Fund and Rough Draft Ventures should not be measured on a returns basis alone today, as it’s too early. I hypothesize that DRF and RDV are actually encouraging more entrepreneurial activity in the ecosystem (more students decide to start companies while in school) as well as improving long-term founder outcomes amongst the students they touch (portfolio founders build bigger and more successful companies later in their careers). As more students start companies, there’s likely a positive feedback loop, with increasing peer pressure to start a company or lean on friends for founder support (e.g. feedback, advice, etc.). Both of these subjects warrant additional study, but it’s likely too early to conduct these analyses today.

Dorm Room Fund and Rough Draft Ventures have impressive alumni that you will want to track. 1 in 4 alumni partners are founders, and 29% of these founder alumni have raised $1M+ seed rounds for their companies. These include Anjney Midha’s augmented reality startup Ubiquity6 (raised $37M+), Shubham Goel’s investor-focused CRM startup Affinity (raised $13M+), Bruno Faviero’s AI security software startup Synapse (raised $6M+), Amanda Bradford’s dating app The League (raised $2M+), and Dillon Chen’s blockchain startup Commonwealth Labs (raised $1.7M). It makes sense to me that alumni from these communities that decide to start companies have an advantage over their peers — they know what good companies look like and they can tap into powerful networks of young talent / experienced investors.

Beyond Dorm Room Fund and Rough Draft Ventures, some venture capital firms focus on incubation for student-founded startups. Credit should first be given to Lightspeed for producing the amazing Summer Fellows bootcamp experience for promising student founders — after all, Pinterest was built there! Jeremy Liew gives a good overview of the program through his sit-down interview with Afterbox’s Zack Banack. Based on a study they conducted last year, 40% of Lightspeed Summer Fellows alumni are currently active founders. Pear Ventures also has an impressive summer incubator program where 85% of its companies successfully complete a fundraise. Index Ventures is the latest to build an incubator program for student founders, and even accepts founders who want to work on an idea part-time while completing a summer internship.

Let’s now look at students who want to join a startup before founding one. Venture funds have historically looked to tap students for talent, and are expanding the engagement lifecycle. The longest running programs include Kleiner Perkins’ KP Fellows and True Ventures’ TEC Fellows, which focus on placing the next generation’s most promising product managers, engineers, and designers into the portfolio companies of their parent venture funds.

There’s also the secretive Greylock X, a referral-based hand-picked group of the best student engineers in Silicon Valley (among their impressive alumni are founders like Yasyf Mohamedali and Joe Kahn, the folks behind First Round-backed Karuna Health). As these programs have matured, these firms have recognized the long-run value of engaging the alumni of their programs.

More and more alumni are “coming back” to the parent funds as entrepreneurs, like KP Fellow Dylan Field of Figma (which is now hosting a KP Fellow itself, closing a full-circle loop!). Based on their latest data, 10% of KP Fellows alumni are founders — that’s a lot given that their community has grown to 500! This helps explain why Kleiner Perkins has created a structured path for companies founded by KP Fellow alumni to receive $100K in seed funding. It looks like venture funds are beginning to invest in student programs as part of their larger platform strategy, which can have a real impact over the long term (for further reading, see this analysis of platform strategy outcomes by USV’s Bethany Crystal).

KP Fellows in San Francisco

Venture funds are doubling down on student talent engagement — in just the last 18 months, 4 funds have launched student programs. It’s encouraging to see new funds follow in the footsteps of First Round, General Catalyst, Kleiner Perkins, Greylock, and Lightspeed. In 2017, Accel launched their Accel Scholars program to engage top talent at UC Berkeley and Stanford. In 2018, we saw 8VC Fellows, NEA Next, and Floodgate Insiders all launch, targeting elite universities outside of Silicon Valley. Y Combinator implemented Early Decision, which allows student founders to apply one batch early to help with academic scheduling. Most recently, at the start of 2019, First Round launched the Graduate Fund (staffed by Dorm Room Fund alumni) to invest in founders who are recent graduates or young alumni.

Given more time, I’d love to study the rates by which student founders start another company following investments from student scout funds, as well as whether or not they’re more successful in those ventures. In any case, this is an escalation in the number of venture funds that have started to get serious about engaging students — both for talent and dealflow.

Student entrepreneurship 2.0 is here. There are more structured paths to success for students interested in starting or joining a startup. Founders have more opportunities to garner press, seek advice, raise capital, and more. Venture funds are increasingly leveraging students to help improve the three F’s — finding, funding, and fixing. In my view, it is becoming more and more important for venture funds to gain mindshare amongst the next generation of founders and operators early, while they are still in school.

I can’t wait to see what’s next for student entrepreneurship in 2019. If you’re interested in digging in deeper (I’m human — I’m sure I haven’t covered everything related to student entrepreneurship here) or learning more about how you can start or join a startup while still in school, shoot me a note at sxu@dormroomfund.com. A massive thanks to Phin Barnes, Rei Wang, Chauncey Hamilton, Peter Boyce, Natalie Bartlett, Denali Tietjen, Eric Tarczynski, Will Robbins, Jasmine Kriston, Alicia Lau, Johnny Hammond, Bruno Faviero, Athena Kan, Shohini Gupta, Alex Immerman, Albert Dong, Phillip Hua-Bon-Hoa, and Trevor Sookraj for your incredible encouragement, support, and insight during the writing of this essay.


Source: The Tech Crunch

Read More

China wants to keep its spot as a leader in the space race with plans to launch 30 missions

Posted by on Jan 31, 2019 in Asia, China, commercial spaceflight, ispace, MIT, outer space, Space, spaceflight, SpaceX, TC, United States | 0 comments

Keeping its spot among the top countries competing in the space race, China is planning to launch 30 missions this year, according to information from the state-run China Aerospace Science and Technology Corp., reported by the Xinhua news agency.

Last year, China outpaced the United States in the number of national launches completed through the middle of December, according to a report in the MIT Technology Review. Public and private Chinese companies launched 35 publicly reported missions in 2018, compared with 30 from the U.S., wrote Joan Johnson-Freese, a professor of national security affairs at the Naval War College.

“Privately funded space startups are changing China’s space industry,” Johnson-Freese wrote at the time. “And even without their help, China is poised to become a space power on par with the United States.”

Major missions for 2019 will include the Long March-5 large carrier rocket, whose last launch was marred by malfunction. If the new Long March launch goes well, China will stage another flight at the end of 2019 to launch a probe designed to bring lunar samples back to Earth.

China will also send up still another version of the Long March rocket to lay the groundwork for the country’s permanent space station.

While the bulk of China’s activity in space is being handled through government ministries and state-owned companies, private companies are starting to make their mark, as well.

LandSpace, OneSpace and iSpace form a triumvirate of privately held Chinese companies that are all developing launch vehicles and planning to carry payloads to space.

In all, using some back-of-the-napkin math and the calendar of launches available at Spaceflight Insider, roughly 80 major rocket launches were scheduled worldwide last year.

Those figures mean that, more than once a week, a rocket blasted off to deliver some sort of payload above the atmosphere. Rocket Lab put its first commercial payload into orbit in November and launched a second rocket the following month. Meanwhile, SpaceX, the darling of the private space industry, launched 21 rockets itself.


Source: The Tech Crunch

Read More

Hospital in China denies links to world’s first gene-edited babies

Posted by on Nov 26, 2018 in Asia, Baidu, Biotech, Cancer, China, Genetics, hiv, MIT, shenzhen, TC | 0 comments

News of the world’s first ever gene-edited human babies being born in China caused a huge stir on Monday after the MIT Technology Review and the Associated Press brought the project to light. People in and outside China rushed to question the ethical implications of the scientific breakthrough, reportedly the fruit of a Chinese researcher named He Jiankui from a university in Shenzhen.

There’s another twist to the story.

According to the AP, He had sought and received approval from Shenzhen HarMoniCare Women’s and Children’s Hospital to kick off the experiment. The MIT Technology Review’s report also linked to documents stating that He’s research received the green light from HarMoniCare’s medical ethics committee.

When contacted by TechCrunch, however, a HarMoniCare spokesperson said she was not aware of He’s genetic test and that the hospital is probing the validity of the circulated documents. TechCrunch will update when the case makes progress.

“What we can say for sure is that the gene editing process did not take place at our hospital. The babies were not born here either,” the spokesperson said of He’s project.

He, who studied at Rice and Stanford Universities, led a research team at the Southern University of Science and Technology that set out to eliminate a gene associated with HIV, smallpox, and cholera by using the CRISPR gene-editing tool, according to the MIT Technology Review. The technology is ethically fraught because changes to the embryo will pass on to future generations. He’s daring initiative is set to cause debate at the upcoming Second International Summit on Human Genome Editing in Hong Kong, which he will attend.

It’s also noteworthy that HarMoniCare belongs to the vast Putian network, a group of some 8,000 private healthcare providers that originated from Putian, Fujian province. That’s according to a list compiled by DXY.cn, a Chinese online community for healthcare professionals. Putian hospitals expanded across China quickly over the years with little government oversight until the death of a college student: in 2016, 21-year-old Wei Zexi died of cancer after receiving dubious treatment from a Putian hospital. The incident also provoked a public outcry over China’s largest search engine, Baidu, which counted Putian hospitals as a major online advertiser.


Source: The Tech Crunch

Read More

With no moving parts, this plane flies on the ionic wind

Posted by on Nov 22, 2018 in Aircraft, MIT, Science, Transportation | 0 comments

Since planes were invented, they’ve flown using moving parts to push air around. Sure, there are gliders and dirigibles, which float more than fly, but powered flight is all about propellers (that’s why they call them that). Today that changes, with the first-ever “solid state” aircraft, flying with no moving parts at all by generating “ionic wind.”

If it sounds like science fiction… well, that’s about right. MIT’s Steven Barrett explains that he took his inspiration directly from Star Trek.

“In the long-term future, planes shouldn’t have propellers and turbines,” Barrett said in an MIT news release. “They should be more like the shuttles in ‘Star Trek,’ that have just a blue glow and silently glide.”

“When I got an appointment at university,” he explained, “I thought, well, now I’ve got the opportunity to explore this, and started looking for physics that enabled that to happen.”

He didn’t discover the principle that ended up making his team’s craft fly — it’s been known for nearly a century but had never been successfully applied to flight.

The basic idea is simple: when you have a powerful source of negatively charged electrons, they pass that charge on to the air around them, “ionizing” it, at which point it flows away from that source and toward — if you set it up right — a “collector” surface nearby. (Nature has a much more detailed explanation; the team’s paper was published in that journal today.)

Essentially what you’re doing is making negatively charged air flow in a direction you choose. This phenomenon was recognized in the ’20s, and in the ’60s researchers even attempted some thrust tests using it. But they were only able to convert about 1 percent of the input electricity into thrust. That’s inefficient, to say the least.

To tell the truth, Barrett et al.’s system doesn’t do a lot better, getting only 2.6 percent of the input energy back as thrust, but they have the benefit of computer-aided design and super-lightweight materials. The team determined that at a certain weight and wingspan, and with the thrust that can be generated at that scale, flight should theoretically be possible. They’ve spent years pursuing it.
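That feasibility logic can be sketched with a back-of-the-envelope check. Only the 2.5 kg mass comes from this article; the lift-to-drag ratio below is an assumed illustrative value, not a figure from the MIT paper:

```python
# Rough level-flight feasibility check for a light, glider-like craft.
G = 9.81             # gravitational acceleration, m/s^2
MASS_KG = 2.5        # craft mass, as reported
LIFT_TO_DRAG = 10.0  # assumed aerodynamic efficiency of the airframe

weight_n = MASS_KG * G                     # ~24.5 N of weight to support
thrust_needed_n = weight_n / LIFT_TO_DRAG  # thrust must balance drag in level flight

# With these assumptions, a couple of newtons of thrust suffices — which is
# why a low-efficiency but very lightweight propulsion system can still fly.
```

The point of the exercise: the lighter and more aerodynamically efficient the airframe, the less thrust (and thus the less input power) an inefficient ionic-wind drive has to supply.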

After many, many revisions (and as many crashes) they arrived at this 5-meter-wide, 2.5-kilogram, multi-decker craft, and after a few false starts it flew… for about 10 seconds. They were limited by the length of the room they tested in, and figure it could go farther, but the very fact that it was able to sustain flight significantly beyond the bounds of gliding is proof enough of the concept.

“This is the first-ever sustained flight of a plane with no moving parts in the propulsion system,” Barrett said. “This has potentially opened new and unexplored possibilities for aircraft which are quieter, mechanically simpler, and do not emit combustion emissions.”

No one, least of all the crew, thinks this is going to replace propellers or jet engines any time soon. But there are lots of applications for a silent and mechanically simple form of propulsion — drones, for instance, could use it for small adjustments or to create soft landings.

There’s lots of work to do. But the goal was to invent a solid-state flying machine, and that’s what they did. The rest is just engineering.


Source: The Tech Crunch

Read More

MIT researchers teach a neural network to recognize depression

Posted by on Sep 4, 2018 in Apps, depression, mental health, MIT, psychology, TC | 0 comments

A new technology by MIT researchers can sense depression by analyzing the written and spoken responses by a patient. The system, pioneered by MIT’s CSAIL group, uses “a neural-network model that can be unleashed on raw text and audio data from interviews to discover speech patterns indicative of depression.”

“Given a new subject, it can accurately predict if the individual is depressed, without needing any other information about the questions and answers,” the researchers write.

The most important part of the system is that it is context-free. This means that it doesn’t require specific questions or types of responses. It simply uses day-to-day interactions as the source data.

“We call it ‘context-free,’ because you’re not putting any constraints into the types of questions you’re looking for and the type of responses to those questions,” said researcher Tuka Alhanai.

“Every patient will talk differently, and if the model sees changes maybe it will be a flag to the doctors,” said study co-author James Glass. “This is a step forward in seeing if we can do something assistive to help clinicians.”

From the release:

The researchers trained and tested their model on a dataset of 142 interactions from the Distress Analysis Interview Corpus that contains audio, text, and video interviews of patients with mental-health issues and virtual agents controlled by humans. Each subject is rated in terms of depression on a scale from 0 to 27, using the Personal Health Questionnaire. Scores above a cutoff between moderate (10 to 14) and moderately severe (15 to 19) are considered depressed, while all others below that threshold are considered not depressed. Out of all the subjects in the dataset, 28 (20 percent) are labeled as depressed.

In experiments, the model was evaluated using metrics of precision and recall. Precision measures which of the depressed subjects identified by the model were diagnosed as depressed. Recall measures the accuracy of the model in detecting all subjects who were diagnosed as depressed in the entire dataset. In precision, the model scored 71 percent and, on recall, scored 83 percent. The averaged combined score for those metrics, considering any errors, was 77 percent. In the majority of tests, the researchers’ model outperformed nearly all other models.
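The arithmetic behind those metrics can be illustrated with a small sketch; the helper function and the toy label/prediction vectors below are purely illustrative, not the researchers’ code or data:

```python
def precision_recall(y_true, y_pred):
    """Compute (precision, recall) for binary labels, where 1 = depressed."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # flagged subjects truly depressed
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # depressed subjects actually found
    return precision, recall

# Averaging the article's reported precision (71%) and recall (83%)
# reproduces the 77% combined score quoted above.
combined = (0.71 + 0.83) / 2
```

Precision penalizes false alarms, while recall penalizes missed cases, which is why the release reports both rather than a single accuracy number.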

Obviously, detection is only part of the process, but this robo-therapist could help real therapists find and isolate issues automatically, versus the long process of manual analysis. It’s a fascinating step forward in mental health.


Source: The Tech Crunch

Read More