The blog of DataDiggers

Startups net more than capital with NBA players as investors

Posted on Jun 1, 2019

If you’re a big basketball fan like me, you’ll be glued to the TV watching the Golden State Warriors take on the Toronto Raptors in the NBA finals. (You might be surprised who I’m rooting for.)

In honor of the big games, we took a shot at breaking down investment activities of the players off the court. Last fall, we did a story highlighting some of the sport’s more prolific investors. In this piece, we’ll take a deeper dive into just what having an NBA player as a backer can do for a startup beyond the capital involved. But first, here’s a chart of some startups funded by NBA players, both former and current.

 

In February, we covered how digital sports media startup Overtime had raised $23 million in a Series B round of funding led by Spark Capital. Former NBA Commissioner David Stern was an early investor in and advisor to the company, putting money into its seed round. Golden State Warriors player Kevin Durant invested as part of the company’s Series A in early 2018 via his busy investment vehicle, Thirty Five Ventures. And then, Carmelo Anthony invested (via his Melo7 Tech II fund) earlier this year. Other NBA-related investors include Baron Davis, Andre Iguodala and Victor Oladipo; other non-NBA backers include Andreessen Horowitz and Greycroft.

I talked to Overtime’s CEO, 27-year-old Zack Weiner, about how the involvement of so many NBA players came about. I also wondered what they brought to the table beyond their cash. But before we get there, let me explain a little more about what Overtime does.

Founded in late 2016 by Dan Porter and Weiner, the Brooklyn company has raised a total of $35.3 million. The pair founded the company after observing “how larger, legacy media companies, such as ESPN, were struggling” with attracting the younger viewer who was tuning into the TV less and less “and consuming sports in a fundamentally different way.”

So they created Overtime, which features about 25 to 30 sports-related shows across several platforms (including YouTube, Snapchat, Instagram, Facebook, TikTok, Twitter and Twitch) aimed at millennials and Gen Z. Weiner estimates the company’s programs get more than 600 million video views every month.

In terms of attracting NBA investors, Weiner told me each situation was a little different, but with one common theme: “All of them were fans of Overtime before we even met them…They saw what we were doing as the new wave of sports media and wanted to get involved. We didn’t have to have 10 meetings for them to understand what we were doing. This is the world they live and breathe.”

So how is having NBA players as investors helping the company grow? Well, for one, they can open a lot of doors, noted Weiner.

“NBA players are very powerful people and investors,” he said. “They’ve helped us make connections in music, fashion and all things tangential to sports. Some have created content with us.”

In addition, their social clout has helped with exposure. Their posting or commenting on Instagram gives the company credibility, Weiner said.

“Also just, in general, getting their perspectives and opinions,” he added. “A lot of our content is based on working with athletes, so they understand what athletes want and are interested in being a part of.”

It’s not just sports-related startups that are attracting the interest of NBA players. I also talked with Hussein Fazal, the CEO of SnapTravel, which recently closed a $21.2 million Series A that included participation from Telstra Ventures and Golden State Warriors point guard Stephen Curry.

Founded in 2016, Toronto-based SnapTravel offers online hotel booking services over SMS, Facebook Messenger, Alexa, Google Home and Slack. It’s driven more than $100 million in sales, according to Fazal, and is seeing its revenue grow about 35% quarter over quarter.

Like Weiner, Fazal told me that Curry’s being active on social media about SnapTravel helped draw positive attention and “add a lot of legitimacy” to his company.

“If you’re an end-consumer about to spend $1,000 on a hotel booking, you might be a little hesitant about trusting a newer brand like ours,” he said. “But if they go to our home page and see our investors, that holds some weight in the eyes of the public, and helps show we’re not a fly-by-night company.”

Another way Curry’s involvement has helped SnapTravel is with recruiting and retaining employees. Curry once spent hours at the office, meeting with employees and doing a Q&A.

“It was really cool,” Fazal said. “And it helps us stand out from other startups when hiring.”

Regardless of who wins the series, it’s clear that startups with NBA investors on their team have a competitive advantage. (Still, Go Raptors!)


Source: TechCrunch


Tufts expelled a student for grade hacking. She claims innocence

Posted on Mar 8, 2019

As she sat in the airport with a one-way ticket in her hand, Tiffany Filler wondered how she would pick up the pieces of her life, with tens of thousands of dollars in student debt and nothing to show for it.

A day earlier, she was expelled from Tufts University veterinary school. As a Canadian, her visa was no longer valid and she was told by the school to leave the U.S. “as soon as possible.” That night, her plane departed the U.S. for her native Toronto, leaving any prospect of her becoming a veterinarian behind.

Filler, 24, was accused of an elaborate months-long scheme involving stealing and using university logins to break into the student records system, view answers, and alter her own and other students’ grades.

The case Tufts presented seems compelling, if not entirely believable.

There’s just one problem: in almost every instance in which the school accused Filler of hacking, she was elsewhere — with proof of her whereabouts or an eyewitness account, and without the laptop she’s accused of using. She has alibis: fellow students who testified to her whereabouts; photos with metadata putting her miles away at the time of some of the alleged hacks; and a sleep tracker that showed she was asleep during others.

Tufts is either right or it expelled an innocent student on shoddy evidence four months before she was set to graduate.

– – –

Guilty until proven innocent

Tiffany Filler always wanted to be a vet.

Ever since she was a teenager, she had her sights set on her future career. With almost four years under her belt at Tufts, which is regarded as one of the best schools for veterinary medicine in North America, she could have written her ticket to any practice. Her friends hold her in high regard, telling me that she is honest and hardworking. She kept her head down, earning cumulative grade point averages of 3.9 for her master’s and 3.5 for her doctorate.

For a time, she was even featured on the homepage of Tufts’ vet school. She was a model final-year student.

Tufts didn’t see it that way.

Filler was called into a meeting on the main campus on August 22 where the university told her of an investigation. She had “no idea” about the specifics of the hacking allegations, she told me on a phone call, until October 18 when she was pulled out of her shift, still in her bloodied medical scrubs, to face the accusations from the ethics and grievance committee.

For three hours, she faced eight senior academics, including one who is said to be a victim of her alleged hacks. The allegations read like a court docket, but Filler said she went in knowing nothing that she could use to defend herself.

Tufts said she stole a librarian’s password to give a mysteriously created user account, “Scott Shaw,” a higher level of system and network access. Filler allegedly used it to look up faculty accounts and reset passwords by swapping out the email address for one she’s accused of controlling, or, in some cases, obtaining passwords and bypassing the school’s two-factor authentication system by exploiting a loophole that simply didn’t require a second security check — a loophole the school has since fixed.

Tufts accused Filler of using this extensive system access to systematically log in as “Scott Shaw” to obtain answers for tests, then take the tests under her own account — activity said to be traced to her computer, based on a unique identifier known as a MAC address, and to the network she allegedly used, either the campus’s wireless network or her off-campus residence. When her grades went up, sometimes other students’ grades went down, the school said.

In other cases, she’s alleged to have broken into the accounts of several assessors in order to alter existing grades or post entirely new ones.

Tiffany Filler, left, with her mother in a 2017 photo at Tufts University.

The bulk of the evidence came from Tufts’ IT department, which said each incident was “well supported” from log files and database records. The evidence pointed to her computer over a period of several months, the department told the committee.

“I thought due process was going to be followed,” said Filler, in a call. “I thought it was innocent until proven guilty until I was told ‘you’re guilty unless you can prove it.’”

Like any private university, Tufts can discipline — even expel — a student for almost any reason.

“Universities can operate like shadow criminal justice systems — without any of the protections or powers of a criminal court,” said Samantha Harris, vice president of policy research at FIRE, a rights group for America’s colleges and universities. “They’re without any of the due process protections for someone accused of something serious, and without any of the powers like subpoenas that you’d need to gather all of the technical evidence.”

Students face an uphill battle defending against any charges of wrongdoing. As was the case with Filler, many students aren’t given time to prepare for hearings, have no right to an attorney, and are not given any or all of the evidence. Some of the broader charges, such as professional misconduct or ethical violations, are even harder to fight. Grade hacking is one such example — and one of the most serious offenses in academia. Where students have been expelled, many have also faced prosecution and the prospect of serving time in prison on federal computer hacking charges.

Harris reviewed documents we provided outlining the university’s allegations and Filler’s appeal.

“It’s troubling when I read her appeal,” said Harris. “It looks as though [the school has] a lot of information in their sole possession that she might try to use to prove her innocence, and she wasn’t given access to that evidence.”

Access to the university’s evidence, she said, was “critical” to due process protections that students should be given, especially when facing suspension or expulsion.

A month later, the committee delivered a unanimous vote that Filler was the hacker and recommended her expulsion.

– – –

A RAT in the room

One of the few facts Filler and Tufts could agree on was that there almost certainly was a hacker. They just disagreed on who it was.

Struggling for answers and convinced her MacBook Air — the source of the alleged hacks — was itself compromised, she paid someone through the freelance marketplace Fiverr to scan her computer. Within minutes, several malicious files were found, chief among them two remote access trojans — or RATs — commonly used by jilted or jealous lovers to spy on their exes’ webcams and remotely control their computers over the internet: Coldroot and CrossRAT. The former is easily deployed; the latter is highly advanced malware said to be linked to the Lebanese government.

Evidence of a RAT might suggest someone had remote control of her computer without her knowledge. But the existence of both on the same machine, experts say, is unlikely, if not entirely implausible.

Thomas Reed, director of Mac and Mobile at Malwarebytes, the same software used to scan Filler’s computer, confirmed the detections but said there was no conclusive evidence to show the malware was functional.

“The Coldroot infection was just the app and was missing the launch daemon that would have been key to keeping it running,” said Reed.

Even if it were functional, how could the hacker have framed her? Could Filler have paid someone to hack her grades? If so, why implicate her — and potentially the hacker — by using her computer? Filler said she was not cautious about her own cybersecurity — insofar as she pinned her password to a corkboard in her room. Could this have been a stitch-up? Was someone in her house trying to frame her?

The landlord told me a staff resident at Tufts veterinary school, who has since left the house, “has bad feelings” and “anger” toward Filler. The former housemate may have motive but no discernible means. We reached out to the former housemate for comment but did not hear back, and therefore are not naming the person.

Filler took her computer to an Apple Store, claiming the “mouse was acting on its own and the green light for the camera started turning on,” she said. The support staff backed up her files but wiped her computer, along with any evidence of malicious software beyond a handful of screenshots she took as part of the dossier of evidence she submitted in her appeal.

It didn’t convince the grievance committee of possible malicious interference.

“Feedback from [IT] indicated that these issues with her computer were in no way related to the alleged allegations,” said Angie Warner, the committee’s acting chair, in an email we’ve seen, recommending Filler’s expulsion. Citing an unnamed IT staffer, the department claimed with “high degree of certainty” that it was “highly unlikely” that the grade changes were “performed by malicious software or persons without detailed and extensive hacking ability.”

Unable to prove who was behind the remote access malware — or even if it was active — she turned back to fighting her defense.

– – –

‘Why wait?’

It took more than a month before Filler would get the specific times of the alleged hacks, revealing, down to the second, when each breach happened.

Filler thought she could convince the committee that she wasn’t the hacker, but later learned that the timings “did not factor” into the deliberations of the grievance committee, wrote Tufts’ veterinary school dean Joyce Knoll in an email dated December 21.

But Filler said she could in all but a handful of cases provide evidence showing that she was not at her computer.

In one of the first allegations of hacking, Filler was in a packed lecture room, with her laptop open, surrounded by her fellow vet school colleagues both beside and behind her. We spoke to several students who knew Filler — none wanted to be named for fear of retribution from Tufts — who wrote letters to testify in Filler’s defense.

All of the students we spoke to said they were never approached by Tufts to confirm or scrutinize their accounts. Two other classmates who saw Filler’s computer screen during the lecture told me they saw nothing suspicious — only her email or the lecture slides.

Another time Filler is accused of hacking, she was on rounds with other doctors, residents and students to discuss patients in their care. One student said Filler was “with the entire rotation group and the residents, without any access to a computer” for two hours.

For another accusation, Filler was out for dinner in a neighboring town. “She did not have her laptop with her,” said one of the fellow students who was with Filler at dinner. The other students sent letters to Tufts in her defense. Tufts said on that occasion, her computer — eight miles away from the restaurant — was allegedly used to access another staff member’s login and to try to bypass the two-factor authentication, using an iPhone 5S, a model Filler doesn’t own. Filler has an iPhone 6. (We asked an IT systems administrator at another company about Duo audit logs: they said if a device not enrolled with Duo tried to enter a valid username and password but couldn’t get past the two-factor prompt, the administrator would see only the device’s software version, not the device type. A Duo spokesperson confirmed that the system does not collect device names.)

Filler, who wears a Xiaomi fitness and sleep tracker, said the tracker’s records showed she was asleep during most, but not all, of the times she’s accused of hacking. She allowed TechCrunch to access the data in her cloud-stored account, which confirmed her account of events.

The list of accusations included a flurry of activity from her computer at her residence, which Tufts said took place between 1 a.m. and 2 a.m. on June 27, 2018 — during which her fitness tracker shows she was asleep — and between 5:30 p.m. and 6:30 p.m. on June 28, 2018.

But Filler was 70 miles away visiting the Mark Twain House in neighboring Hartford, Connecticut. She took two photos of her visit — one of her in the house, and another of her standing outside.

We asked Jake Williams, a former NSA hacker who founded cybersecurity and digital forensics firm Rendition Infosec, to examine the metadata embedded in the photos. The photos, taken from her iPhone, contained a matching date and time for the alleged hack, as well as a set of coordinates putting her at the Mark Twain House.

While photo metadata can be modified, Williams said the signs he expected to see for metadata modification weren’t there. “There is no evidence that these were modified,” he said.
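The check Williams performed can be sketched roughly: compare a photo's embedded timestamp and GPS coordinates against the time and claimed origin of an alleged hack. This is a toy illustration, not Williams's actual methodology; it assumes the EXIF tags have already been extracted (e.g. with a tool such as exiftool), and the coordinates below are approximate values invented for the example.

```python
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def km_between(a, b):
    """Great-circle distance in km between two (lat, lon) pairs (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def photo_is_alibi(photo_time, photo_gps, hack_time, hack_gps,
                   max_skew=timedelta(minutes=30), min_km=5.0):
    """True if the photo was taken near the alleged hack time but far
    from where the hack supposedly originated."""
    return abs(photo_time - hack_time) <= max_skew and km_between(photo_gps, hack_gps) >= min_km

# Invented, approximate coordinates: Mark Twain House (Hartford) vs. a
# residence near the Tufts vet campus, at the time of the alleged June 28 hack.
photo = (datetime(2018, 6, 28, 17, 45), (41.7670, -72.7013))
hack = (datetime(2018, 6, 28, 18, 0), (42.2334, -71.6800))
print(photo_is_alibi(*photo, *hack))
```

The thresholds (`max_skew`, `min_km`) are arbitrary choices for the sketch; a real forensic review would also verify, as Williams did, that the metadata itself shows no signs of editing.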

Yet none of it was good enough to keep her enrolled at Tufts. In a letter on January 16 affirming her expulsion, Knoll rejected the evidence.

“Date stamps are easy to edit,” said Knoll. “In fact, the photos you shared with me clearly include an ‘edit’ button in the upper corner for this exact purpose,” she wrote, referring to the iPhone software’s native photo editing feature. “Why wait until after you’d been informed that you were going to be expelled to show me months’ old photos?” she said.

“My decision is final,” said her letter. Filler was expelled.

Filler’s final expulsion letter. (Image: supplied)

– – –

The little things

Filler is back home in Toronto. As her class is preparing to graduate without her in May, Tufts has already emailed her to begin reclaiming her loans.

News of Filler’s expulsion was not unexpected given the drawn-out length of the investigation, but many were stunned by the result, according to the students we spoke to. From the time of the initial investigation, many believed Filler would not escape the trap of “guilty until proven innocent.”

“I do not believe Tiffany received fair treatment,” said one student. “As a private institution, it seems like we have few protections [or] ways of recourse. If they could do this to Tiffany, they could do it to any of us.”

TechCrunch sent Tufts a list of 19 questions prior to publication — including whether the university hired qualified forensics specialists to investigate, whether law enforcement was contacted, and whether the school plans to press criminal charges for the alleged hacking.

“Due to student privacy concerns, we are not able to discuss disciplinary matters involving any current or former student of Cummings School of Veterinary Medicine at Tufts University,” said Tara Pettinato, a Tufts spokesperson. “We take seriously our responsibility to ensure our students’ privacy, to maintain the highest standards of academic integrity, and to adhere to our policies and processes, which are designed to be fair and equitable to all students.”

We asked if the university would answer our questions if Filler waived her right to privacy. The spokesperson said the school “is obligated to follow federal law and its own standards and practices relating to privacy,” and would not discuss disciplinary matters involving any current or former student.

The spokesperson declined to comment further.

But even the little things don’t add up.

Tufts never said how it obtained her IP address. Her landlord told me Tufts never asked for it, let alone confirmed it was accurate. Courts have thrown out cases that relied on IP addresses as evidence when others shared the same network. MAC addresses can identify devices but can be easily spoofed. Filler owns an iPhone 6, not an iPhone 5S, as claimed by Tufts. And her computer name was different from what Tufts said.
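The point about MAC addresses is worth underlining: the identifier is reported by the device itself and can be changed in software (on Linux, a single `ip link set` command). A minimal sketch of why it is weak evidence — generating a plausible spoof value takes a few lines; the function name is invented for illustration:

```python
import random

def random_mac():
    """Generate a random locally administered, unicast MAC address —
    the kind of value that can be trivially assigned to a network
    interface in software, which is why a MAC address is weak
    evidence of a device's identity."""
    # First octet: set the locally administered bit (0x02),
    # clear the multicast bit (0x01).
    first = (random.randint(0, 255) | 0x02) & 0xFE
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

print(random_mac())  # e.g. a value like "6a:3f:91:0c:5d:e2"
```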

And how did she allegedly get access to the “Scott Shaw” password in the first place?

Warner, the committee chair, said in a letter that the school “does not know” how the initial librarian’s account was compromised, and that it was “irrelevant” if Filler even created the “Scott Shaw” account.

Many accounts were breached as part of this apparently elaborate scheme to alter grades, but there is no evidence Tufts hired any forensics experts to investigate. Did the IT department investigate with an inherent confirmation bias, trying to find evidence that connected Filler’s account with the suspicious activity, or were the allegations constructed after Filler was identified as a suspect? And why did the university take months from the first alleged hack to protect user accounts with two-factor authentication, and not move sooner?

“The data they are looking at doesn’t support the conclusions they’ve drawn,” said Williams, following his analysis of the case. “The data they’re relying on is far from the normal or necessary burden of evidence that you would use for an adverse action like this.

“They did DIY forensics,” he continued. “And they opened themselves up to legal exposure by doing the investigation themselves.”

Not every story has a clear ending. This is one of them. If you’ve read this far hoping for answers, know that we want them, too.

But we know two things for certain. First, Tufts expelled a student months before she was set to graduate based on a broken system of academic-led, non-technical committees forced to rely on weak evidence from IT technicians who had no discernible qualifications in digital forensics. And second, it doesn’t have to say why.

Or as one student said: “We got her side of the story, and Tufts was not transparent.”

Source: TechCrunch


Jam City is setting up a Toronto shop by buying Bingo Pop from Uken Games

Posted on Nov 28, 2018

The Los Angeles game development studio Jam City is setting up shop in Toronto with the acquisition of Bingo Pop from Uken Games.

Terms of the deal weren’t disclosed.

The deal is part of a broader effort to expand the Jam City portfolio of games and its geographic footprint. In recent months, the company has inked agreements with Disney — taking over development duties on some of Disney’s games, like Disney Emoji Blitz, and signing on to develop new ones — and has launched new games in conjunction with other famous franchises, like Harry Potter.

The Bingo Pop acquisition brings a gambling game into the casual game developer’s stable of titles; Bingo Pop pulled in roughly $700,000 in revenue through October, according to data from SensorTower.

“We are so proud to be continuing Jam City’s rapid global expansion with the acquisition of one of the most popular bingo titles, and its highly talented team,” said Chris DeWolfe, co-founder and CEO of Jam City, in a statement. “This acquisition provides Jam City with access to leading creative talent in one of the fastest growing and most exciting tech markets in the world. We look forward to working with the talented Jam City team in Toronto as we supercharge the live operations of Bingo Pop and develop innovative new titles and mobile entertainment experiences.”

Founded in Los Angeles in 2009 by DeWolfe, who previously helped create and launch Myspace, and 20th Century Fox exec Josh Yguado, Jam City rose to prominence on the back of its Cookie Jam and Panda Pop games. Now, the company has expanded through licensing deals with Harry Potter, Family Guy, Marvel and now Disney. Jam City has offices in Los Angeles, San Francisco, San Diego, Bogota and Buenos Aires.


Source: TechCrunch


Integrate.ai pulls in $30M to help businesses make better customer-centric decisions

Posted on Sep 12, 2018

Helping businesses bring more firepower to the fight against AI-fuelled disruptors is the name of the game for Integrate.ai, a Canadian startup that’s announcing a $30M Series A today.

The round was led by Portag3 Ventures, with participation from Georgian Partners, Real Ventures and other (unnamed) individual investors. The funding will be used for a big push in the U.S. market.

Integrate.ai’s early focus has been on retail banking, retail and telcos, says founder Steve Irvine, along with some startups that have data but aren’t necessarily awash with the AI expertise to throw at it. (Not least because tech giants continue to hoover up talent.)

Its SaaS platform targets consumer-centric businesses — offering to plug paying customers into a range of AI technologies and techniques to optimize their decision-making so they can respond more savvily to their customers. Aka turning “high volume consumer funnels” into “flywheels”, if that’s a mental image that works for you.

In short, it’s selling AI pattern-spotting insights as a service via a “cloud-based AI intelligence platform” — helping businesses move from “largely rules-based decisioning” to “more machine learning-based decisioning boosted by this trusted signals exchange of data”, as he puts it.

Irvine gives the example of a large insurance aggregator the startup is working with to optimize the distribution of gift cards and incentive discounts to potential customers — with the aim of maximizing conversions.

“Obviously they’ve got a finite amount of budget for those — they need to find a way to be able to best deploy those… And the challenge that they have is they don’t have a lot of information on people as they start through this funnel — and so they have what is a classic ‘cold start’ problem in machine learning. And they have a tough time allocating those resources most effectively.”

“One of the things that we’ve been able to help them with is to, essentially, find the likelihood of those people to be able to convert earlier by being able to bring in some interesting new signal for them,” he continues. “Which allows them to not focus a lot of their revenue or a lot of those incentives on people who either have a low likelihood of conversion or are most likely to convert. And they can direct all of those resources at the people in the middle of the distribution — where that type of a nudge, that discount, might be the difference between them converting or not.”
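The targeting logic Irvine describes — skip users who are almost certain to convert (or not to), and spend the incentive budget on the middle of the distribution, where a nudge might tip the balance — can be sketched in a few lines. This is a toy illustration, not Integrate.ai’s actual system; the thresholds and scores are invented.

```python
def pick_incentive_targets(scores, low=0.2, high=0.8):
    """Given predicted conversion probabilities per user, return the
    users in the middle band, where an incentive is most likely to
    matter. Users below `low` are unlikely to convert even with a
    discount; users above `high` will likely convert without one."""
    return [user for user, p in scores.items() if low <= p <= high]

scores = {"a": 0.05, "b": 0.45, "c": 0.62, "d": 0.93}
targets = pick_incentive_targets(scores)
# "b" and "c" fall in the middle band; "a" and "d" are skipped
```

The hard part, per Irvine, is producing good probabilities early in the funnel despite the cold-start problem — which is where the pooled cross-company signal is meant to help.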

He says feedback from early customers suggests the approach has boosted profitability by around 30% on average for targeted business areas — so the pitch is that businesses quickly see the SaaS paying for itself. (In the cited case of the insurer, he says they saw a 23% boost in performance — against what he couches as already “a pretty optimized funnel”.)

“We find pretty consistent [results] across a lot of the companies that we’re working with,” he adds. “Most of these decisions today are made by a CRM system or some other more deterministic software system that tends to over attribute people that are already going to convert. So if you can do a better job of understanding people’s behaviour earlier you can do a better job at directing those resources in a way that’s going to drive up conversion.”

The former Facebook marketing exec, who between 2014 and 2017 ran a couple of global marketing partner programs at Facebook and Instagram, left the social network at the start of last year to found the business — raising $9.6M in seed funding in two tranches, according to Crunchbase.

The eighteen-month-old, Toronto-based AI startup now touts itself as one of the fastest-growing companies in Canadian history, with a headcount of around 40 at this point and a plan to grow staff 3x to 4x over the next 12 months. Irvine is also targeting growing revenue 10x with the new funding in place — gunning to carve out a leadership position in the North American market.

One key aspect of Integrate.ai’s platform approach means its customers aren’t only being helped to extract more and better intel from their own data holdings, via processes such as structuring the data for AI processing (though Irvine says it’s also doing that).

The idea is they also benefit from the wider network, deriving relevant insights across Integrate.ai’s pooled base of customers — in a way that does not trample over privacy in the process. At least, that’s the claim.

(It’s worth noting Integrate.ai’s network is not a huge one yet, with customers numbering in the “tens” at this point — the platform only launched in alpha around 12 months ago and remains in beta now. Named customers include the likes of Telus, Scotiabank, and Corus.)

So the idea is to offer an alternative route to boost business intelligence vs the “traditional” route of data-sharing by simply expanding databases — because, as Irvine points out, literal data pooling is “coming under fire right now — because it is not in the best interests, necessarily, of consumers; there’s some big privacy concerns; there’s a lot of security risk which we’re seeing show up”.

What exactly is Integrate.ai doing with the data then? Irvine says its Trusted Signals Exchange platform uses some “pretty advanced techniques in deep learning and other areas of machine learning to be able to transfer signals or insights that we can gain from different companies such that all the companies on our platform can benefit by delivering more personalized, relevant experiences”.

“But we don’t need to ever, kind of, connect data in a more traditional way,” he also claims. “Or pull personally identifiable information to be able to enable it. So it becomes very privacy-safe and secure for consumers which we think is really important.”

He further couches the approach as “pretty unique”, adding it “wouldn’t even have been possible probably a couple of years ago”.

From Irvine’s description, the approach sounds similar to the data-linking (via mathematical modeling) route being pursued by another startup, UK-based InfoSum — which has built a platform that extracts insights from linked customer databases while holding the actual data in separate silos. (And InfoSum, which was founded in 2016, also has a founder with a behind-the-scenes view of the inner workings of the social web — in the form of DataSift’s Nic Halstead.)

Facebook’s own Custom Audiences product, which lets advertisers upload and link their customer databases with the social network’s data holdings for marketing purposes, is the likely inspiration behind all of these efforts.

Irvine says he spotted the opportunity to build this line of business while at Facebook, where his marketing partnerships role had him meeting with scores of companies and hearing high-level concerns about competing with tech giants. He says the job also afforded him an overview of startup innovation — and there he spied a gap for Integrate.ai to plug.

“My team was in 22 offices around the world, in all the major tech hubs, and so we got a chance to see any of the interesting startups that were getting traction pretty quickly,” he tells TechCrunch. “That allowed us to see the gaps that existed in the market. And the biggest gap that I saw… was these big consumer enterprises needed a way to use the power of AI and needed access to third party data signals or insights to enable them to transition to this more customer-centric operating model to have any hope of competing with the large digital disruptors like Amazon.

“That was kind of the push to get me out of Facebook, back from California to Toronto, Canada, to start this company.”

Again on the privacy front, Irvine is a bit coy about going into exact details about the approach. But he is unequivocal and emphatic that ad tech players are stepping over the line — having peered into that Pandora’s box for years — so his rationale for wanting to do things differently at least looks clear.

“A lot of the techniques that we’re using are in the field of deep learning and transfer learning,” he says. “If you think about the ultimate consumer of this data-sharing, that is insight sharing, it is at the end these AI systems or models. Meaning that it doesn’t need to be legible to people as an output — all we’re really trying to do is increase the map; make a better probabilistic decision in these circumstances where we might have little data or not the right data that we need to be able to make the right decision. So we’re applying some of the newer techniques in those areas to be able to essentially kind of abstract away from some of the more sensitive areas, create representations of people and patterns that we see between businesses and individuals, and then use that as a way to deliver more personalized predictions — without ever having to know the individual’s personally identifiable information.”
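To make the idea of “representations of people” concrete, here is a minimal sketch of one way such signal-sharing could work. Everything here is an illustrative assumption — column names, dimensions, and the use of a simple PCA-style projection stand in for whatever Integrate.ai actually does: a business keeps its PII, learns a low-dimensional representation from behavioral features only, and shares just the learned projection, which a second business with sparser data can reuse.

```python
import numpy as np

# Hypothetical sketch of sharing a learned representation instead of raw data.
# All names and dimensions are illustrative, not Integrate.ai's actual schema.

rng = np.random.default_rng(42)

# Business A's customer table: PII columns plus behavioral features.
pii_columns = ["name", "email"]          # never leaves the business
behavior_a = rng.normal(size=(500, 10))  # e.g. visit/purchase signals

# Representation-learning stand-in: PCA via SVD on centered behavior data.
centered = behavior_a - behavior_a.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection = vt[:3].T  # keep a 3-dimensional embedding

# Only the projection (a 10x3 weight matrix) is shared. Business B, which has
# far less data, embeds its own customers into the same space; no per-customer
# rows and no PII ever cross the boundary between the two companies.
behavior_b = rng.normal(size=(50, 10))
embedding_b = (behavior_b - behavior_b.mean(axis=0)) @ projection

print(projection.shape)   # (10, 3)
print(embedding_b.shape)  # (50, 3)
```

The key property, in this toy version, is that the shared artifact is a small matrix of model weights rather than a customer database — closer in spirit to transfer learning than to data pooling.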

“We do do some work with differential privacy,” he adds when pressed further on the specific techniques being used. “There’s some other areas that are just a little bit more sensitive in terms of the work that we’re doing — but a lot of work around representation learning and transfer learning.”
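Differential privacy, which Irvine name-checks above, has a standard textbook form worth illustrating: add calibrated noise to an aggregate before releasing it, so no individual’s presence can be inferred from the answer. The epsilon value and the counting query below are illustrative assumptions, not anything Integrate.ai has disclosed.

```python
import numpy as np

def laplace_noisy_count(true_count: int, epsilon: float, rng) -> float:
    """Release a count with Laplace noise of scale sensitivity/epsilon.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise scale is 1/epsilon.
    """
    scale = 1.0 / epsilon
    return true_count + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
true_count = 1000  # e.g. customers who converted this week
epsilon = 0.5      # smaller epsilon => more noise, stronger privacy

# Any single release is noisy, but the mechanism is unbiased: averaging many
# hypothetical releases hovers around the true count.
releases = [laplace_noisy_count(true_count, epsilon, rng) for _ in range(10_000)]
print(np.mean(releases))  # close to 1000
```

The trade-off is explicit: each released statistic is slightly wrong, in exchange for a mathematical bound on what it reveals about any one customer.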

Integrate.ai has published a whitepaper — for a framework to “operationalize ethics in machine learning systems” — and Irvine says it’s been called in to meet and “share perspectives” with regulators based on that.

“I think we’re very GDPR-friendly based on the way that we have thought through and constructed the platform,” he also says when asked whether the approach would be compliant with the European Union’s tough new privacy framework (which also places some restrictions on entirely automated decisions when they could have a significant impact on individuals).

“I think you’ll see GDPR and other regulations like that push more towards these type of privacy preserving platforms,” he adds. “And hopefully away from a lot of the really creepy, weird stuff that is happening out there with consumer data that I think we all hope gets eradicated.”

For the record, Irvine denies any suggestion that he was thinking of his old employer when he referred to “creepy, weird stuff” done with people’s data — saying: “No, no, no!”

“What I did observe when I was there in ad tech in general, I think if you look at that landscape, I think there are many, many… worse examples of what is happening out there with data than I think the ones that we’re seeing covered in the press. And I think as the light shines on more of that ecosystem of players, I think we will start to see that the ways they’ve thought about data, about collection, permissioning, usage, I think will change drastically,” he adds.

“And the technology is there to be able to do it in a much more effective way without having to compromise results in too big a way. And I really hope that that sea change has already started — and I hope that it continues at a much more rapid pace than we’ve seen.”

But while an alternative to traditional data-pooling might reduce privacy concerns (depending on the exact techniques used), additional ethical considerations come sharply into view when companies seek to supercharge their profits by automating decision-making in sensitive, impactful areas such as discounting — where some users stand to gain more than others.

The point is that an AI system expert at spotting the lowest-hanging fruit (in conversion terms) could start distributing discounts only to a narrow subsection of users — meaning other people might never be offered a discount at all.

In short, it risks the platform creating unfair and/or biased outcomes.

Integrate.ai has recognized the ethical pitfalls, and appears to be trying to get ahead of them — hence its aforementioned ‘Responsible AI in Consumer Enterprise’ whitepaper.

Irvine also says that raising awareness around issues of bias and “ethical AI”, and promoting “more responsible use and implementation” of its platform, is another priority over the next twelve months.

“The biggest concern is the unethical treatment of people in a lot of common, day-to-day decisions that companies are going to be making,” he says of problems attached to AI. “And they’re going to do it without understanding, and probably without bad intent, but the reality is the results will be the same — which is perpetuating a lot of biases and stereotypes of the past. Which would be really unfortunate.

“So hopefully we can continue to carve out a name, on that front, and shift the industry more to practices that we think are consistent with the world that we want to live in vs the one we might get stuck in.”

The whitepaper was produced by a dedicated internal team, which he says focuses on AI ethics and fairness issues, and is headed up by VP of product & strategy, Kathryn Hume.

“We’re doing a lot of research now with the Vector Institute for AI… on fairness in our AI models, because what we’ve seen so far is that — if left unattended, if all we did was run these models and not adjust for some of the ethical considerations — we would just perpetuate biases that we’ve seen in the historical data,” he adds.

“We would pick up patterns that are more commonly associated with maybe reinforcing particular stereotypes… so we’re putting a really dedicated effort — probably abnormally large, given our size and stage — towards leading in this space, and making sure that that’s not the outcome that gets delivered through effective use of a platform like ours. But actually, hopefully, the total opposite: You have a better understanding of where those biases might creep in and they could be adjusted for in the models.”
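The kind of check Irvine describes — catching a model that “if left unattended” would skew its treatment of different groups — can be sketched with a simple demographic-parity measurement. The group labels, discount policy, and tolerance threshold below are all illustrative assumptions, not Integrate.ai’s actual methodology.

```python
import numpy as np

def demographic_parity_gap(offered, group):
    """Largest difference in discount rate between any two groups."""
    rates = {g: offered[group == g].mean() for g in np.unique(group)}
    return max(rates.values()) - min(rates.values()), rates

# Two hypothetical customer groups of 100 each.
group = np.array(["a"] * 100 + ["b"] * 100)

# A conversion-optimized policy that showers group "a" with discounts:
# 80% of group "a" gets one, versus only 30% of group "b".
offered = np.array([1] * 80 + [0] * 20 + [1] * 30 + [0] * 70)

gap, rates = demographic_parity_gap(offered, group)
print(rates)  # {'a': 0.8, 'b': 0.3}
print(gap)    # 0.5

# A simple guardrail: flag the policy if the gap exceeds a tolerance.
if gap > 0.2:
    print("policy exceeds parity tolerance; adjust before deployment")
```

In practice a team would go further — constraining the model during training rather than just auditing after the fact — but even a post-hoc gap measurement like this surfaces the “perpetuating biases” failure mode before it reaches customers.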

Combating unfairness in this type of AI tool would mean a company having to optimize conversion performance a bit less than it otherwise could.

Though Irvine suggests that cost is likely just short term. Over the longer term, he argues, you’re laying the foundations for greater growth — because you’re building a more inclusive business: “We have this conversation a lot. I think it’s good for business; it’s just the time horizon that you might think about.”

“We’ve got this window of time right now, that I think is a really precious window, where people are moving over from more deterministic software systems to these more probabilistic, AI-first platforms… They just operate much more effectively, and they learn much more effectively, so there will be a boost in performance no matter what. If we can get them moved over right off the bat onto a platform like ours that has more of an ethical safeguard, then they won’t notice a drop off in performance — because it’ll actually be better performance. Even if it’s not optimized fully for short term profitability,” he adds.

“And we think, over the long term, it’s just better business if you’re a socially conscious, ethical company. We think, over time, especially this new generation of consumers, they start to look out for those things more… So we really hope that we’re on the right side of this.”

He also suggests that the wider visibility afforded by having AI do the probabilistic pattern spotting (vs just using a set of rules) could even help companies identify unfairness they don’t realize is holding their businesses back.

“We talk a lot about this concept of mutual lifetime value — which is how do we start to pull in the signals that show that people are getting value in being treated well, and can we use those signals as part of the optimization. And maybe you don’t have all the signal you need on that front, and that’s where being able to access a broader pool can actually start to highlight those biases more.”


Source: TechCrunch