
The blog of DataDiggers


Vizion.ai launches its managed Elasticsearch service

Posted by on Mar 28, 2019 in Amazon Web Services, api, Artificial Intelligence, Caching, cloud computing, computing, Developer, Elastic, Elasticsearch, Enterprise, ML, TC, world wide web | 0 comments

Setting up Elasticsearch, the open-source system that many companies large and small use to power their distributed search and analytics engines, isn’t the hardest thing. What is very hard, though, is to provision the right amount of resources to run the service, especially when your users’ demand comes in spikes, without overpaying for unused capacity. Vizion.ai’s new Elasticsearch Service does away with all of this by essentially offering Elasticsearch as a service and only charging its customers for the infrastructure they use.

Vizion.ai’s service automatically scales up and down as needed. It’s a managed service and delivered as a SaaS platform that can support deployments on both private and public clouds, with full API compatibility with the standard Elastic stack that typically includes tools like Kibana for visualizing data, Beats for sending data to the service and Logstash for transforming the incoming data and setting up data pipelines. Users can easily create several stacks for testing and development, too, for example.
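The promise of "full API compatibility" is that tooling written against the standard Elasticsearch REST API should keep working unchanged once pointed at the managed endpoint. A minimal sketch, assuming a hypothetical cluster URL (nothing here is Vizion.ai's actual API), of building a standard Elasticsearch query-DSL search body:

```python
import json

# Hypothetical endpoint: with a fully API-compatible service, only the
# base URL (and credentials) should change, not the request format.
BASE_URL = "https://example-cluster.vizion.example/logs-2019.03"  # assumption

def build_search_request(field, term, size=10):
    """Build a standard Elasticsearch query-DSL search body.

    Any Elasticsearch-compatible endpoint should accept this unchanged.
    """
    return {
        "size": size,
        "query": {"match": {field: term}},
        "sort": [{"@timestamp": {"order": "desc"}}],
    }

body = build_search_request("message", "timeout", size=5)
print(json.dumps(body))
```

The same body could then be POSTed to `BASE_URL + "/_search"` with any HTTP client, which is the practical meaning of API compatibility here.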

Vizion.ai GM and VP Geoff Tudor

“When you go into the AWS Elasticsearch service, you’re going to be looking at dozens or hundreds of permutations for trying to build your own cluster,” Vizion.ai’s VP and GM Geoff Tudor told me. “Which instance size? How many instances? Do I want geographical redundancy? What’s my networking? What’s my security? And if you choose wrong, then that’s going to impact the overall performance. […] We do balancing dynamically behind that infrastructure layer.” To do this, the service looks at the utilization patterns of a given user and then allocates resources to optimize for the specific use case.

What Vizion.ai has done here is take some of the work from its parent company Panzura, a multi-cloud storage service for enterprises that has plenty of patents around data caching, and apply it to this new Elasticsearch service.

There are obviously other companies that offer commercial Elasticsearch platforms already. Tudor acknowledges this, but argues that his company’s platform is different. With other products, he argues, you have to decide on the size of your block storage for your metadata upfront, for example, and you typically want SSDs for better performance, which can quickly get expensive. Thanks to Panzura’s IP, Vizion.ai is able to bring down the cost by caching recent data on SSDs and keeping the rest in cheaper object storage pools.

He also noted that the company is positioning the overall Vizion.ai service, with the Elasticsearch service as one of the earliest components, as a platform for running AI and ML workloads. Support for TensorFlow, PredictionIO (which plays nicely with Elasticsearch) and other tools is also in the works. “We want to make this an easy serverless ML/AI consumption in a multi-cloud fashion, where not only can you leverage the compute, but you can also have your storage of record at a very cost-effective price point.”


Source: The Tech Crunch


Car alarms with security flaws put 3 million vehicles at risk of hijack

Posted by on Mar 8, 2019 in Alarms, api, Automotive, California, computer security, founder, Security, United Kingdom | 0 comments

Two popular car alarm systems have fixed security vulnerabilities that allowed researchers to remotely track, hijack and take control of vehicles with the alarms installed.

The systems, built by Russian alarm maker Pandora and California-based Viper — or Clifford in the U.K. — were vulnerable to an easily manipulated server-side API, according to researchers at Pen Test Partners, a U.K. cybersecurity company. According to their findings, the API could be abused to take control of an alarm system’s user account — and their vehicle.

The vulnerable alarm systems could be tricked into resetting an account’s password because the API failed to check whether the request was authorized, allowing the researchers to log in.
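To make the failure concrete: the flaw described amounts to a password-reset endpoint that never checks whether the authenticated session actually owns the target account. A minimal illustrative sketch of the missing server-side check (not the vendors' actual code; the names and placeholder hashing are invented):

```python
def reset_password(session_user_id, target_account_id, new_password, accounts):
    """Reset an account password only if the authenticated user owns it.

    The vulnerability described above amounts to skipping the ownership
    check below, letting any authenticated user reset any account's
    password and then log in as that user.
    """
    account = accounts.get(target_account_id)
    if account is None:
        return "not_found"
    if account["owner_id"] != session_user_id:  # the missing authorization check
        return "forbidden"
    # Placeholder hashing only; a real system would use a proper password KDF.
    account["password_hash"] = hash(new_password)
    return "ok"
```

With the ownership check in place, a request from user `u2` against an account owned by `u1` is rejected rather than silently honored.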

Although the researchers bought alarms to test, they said “anyone” could create a user account to access any genuine account or extract all the companies’ user data.

The researchers said some three million cars globally were vulnerable to the flaws, since fixed.

In one example demonstrating the hack, the researchers geolocated a target vehicle, tracked it in real time, followed it, remotely killed the engine to force the car to stop, and unlocked the doors. The researchers said it was “trivially easy” to hijack a vulnerable vehicle. Worse, it was possible to identify some car models, making targeted hijacks of high-end vehicles even easier.

The researchers also found they could listen in on the in-car microphone, built in to the Pandora alarm system for making calls to the emergency services or roadside assistance.

Ken Munro, founder of Pen Test Partners, told TechCrunch this was their “biggest” project.

The researchers contacted both Pandora and Viper with a seven-day disclosure period, given the severity of the vulnerabilities. Both companies responded quickly to fix the flaws.

When reached, Viper’s Chris Pearson confirmed the vulnerability has been fixed. “If used for malicious purposes, [the flaw] could allow customer’s accounts to be accessed without authorization.”

Viper blamed a recent system update by a service provider for the bug and said the issue was “quickly rectified.”

“Directed believes that no customer data was exposed and that no accounts were accessed without authorization during the short period this vulnerability existed,” said Pearson, but provided no evidence as to how the company came to that conclusion.

In a lengthy email, Pandora’s Antony Noto challenged several of the researchers’ findings, summarizing: “The system’s encryption was not cracked, the remotes were not hacked, [and] the tags were not cloned,” he said. “A software glitch allowed temporary access to the device for a short period of time, which has now been addressed.”

The research follows work last year by Vangelis Stykas on Calamp, a telematics provider that serves as the basis for Viper’s mobile app. Stykas, who later joined Pen Test Partners and also worked on the car alarm project, found the app was using credentials hardcoded in the app to log in to a central database, which gave anyone who logged in remote control of a connected vehicle.


Source: The Tech Crunch


Gaming clips service Medal has bought Donate Bot for direct donations and payments

Posted by on Mar 5, 2019 in api, bot, computing, discord, E-Commerce, freeware, Gaming, M&A, operating systems, Patreon, PayPal, Shopify, social media platforms, Software, Steam, subscription services, TC, Twitter | 0 comments

The Los Angeles-based video gaming clipping service Medal has made its first acquisition as it rolls out new features to its user base.

The company has acquired the Discord-based donations and payments service Donate Bot to enable direct payments and other types of transactions directly on its site.

Now, the company is rolling out a service to any Medal user with more than 100 followers, allowing them to accept donations, subscriptions and payments directly from their clips on mobile, web, desktop and through embedded clips, according to a blog post from company founder Pim De Witte.

For now, and for at least the next year, the service will be free to Medal users — meaning the company won’t take a dime of any users’ revenue made through payments on the platform.

For users who already have a storefront up with Patreon, Shopify, Paypal.me, Streamlabs or Ko-fi, Medal won’t wreck the channel — it integrates with those and other payment processing systems.

Through the Donate Bot service, any user with a Discord server can generate a donation link, which can be customized to become more of a customer acquisition funnel for teams or gamers that sell their own merchandise.

A webhooks API gives users a way to add donors to various lists or subscription services or stream overlays, and the Donate Bot is directly linked with Discord Bot List and Discord Server List as well, so users can accept donations without having to set up a website.
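Donate Bot's actual webhook format isn't documented here, but a common pattern when consuming webhooks like these is to verify an HMAC signature on the payload before acting on a donation event. A hedged sketch, with the secret and signature scheme purely illustrative:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Check an HMAC-SHA256 signature on a raw webhook payload.

    The header name, secret handling and digest choice here are
    illustrative; a real integration should follow the provider's
    documented signing scheme.
    """
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side-channels when comparing signatures.
    return hmac.compare_digest(expected, signature_hex)
```

A handler would call this on the raw request body before adding the donor to any list or overlay, and reject the event if verification fails.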

In addition, the company updated its social features, so clips made on Medal can ultimately be shared on social media platforms like Twitter and Discord — and the company is also integrated with Discord, Twitter and Steam in a way to encourage easier signups.


Source: The Tech Crunch


Amazon stops selling stick-on Dash buttons

Posted by on Mar 1, 2019 in Amazon, amazon dash, api, button, connected objects, Dash, dash button, Dash Replenishment, E-Commerce, eCommerce, Gadgets, Germany, Internet of things, IoT, voice assistant | 0 comments

Amazon has confirmed it’s retired physical stick-on Dash buttons from sale — in favor of virtual alternatives that let Prime Members tap a digital button to reorder a staple product.

It also points to its Dash Replenishment service — which offers an API for device makers wanting to build Internet-connected appliances that can automatically reorder the products they need to function — be it cat food, batteries or washing powder — as another reason why physical Dash buttons, which launched back in 2015 (costing $5 a pop), are past their sell-by date.
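The real Dash Replenishment API requires registering a device with Amazon, but the device-side trigger logic it wraps can be sketched as a simple threshold check (purely illustrative; `should_reorder` and its parameters are invented for this example, not Amazon's API):

```python
def should_reorder(level_pct, threshold_pct=20, order_pending=False):
    """Device-side replenishment trigger.

    Fire a reorder when the measured supply level falls below a
    threshold and no replenishment order is already in flight —
    the kind of check a self-ordering appliance would run before
    calling a replenishment API.
    """
    return level_pct < threshold_pct and not order_pending
```

When this returns `True`, the appliance would make one API call to place the order and then set `order_pending` until the delivery is confirmed.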

Amazon says “hundreds” of IoT devices capable of self-ordering on Amazon have been launched globally to date by brands including Beko, Epson, illy, Samsung and Whirlpool, to name a few.

So why press a physical button when a digital one will do? Or, indeed, why not do away with the need to push a button at all and just let your gadgets rack up your grocery bill all by themselves while you get on with the important business of consuming all the stuff they’re ordering?

You can see where Amazon wants to get to with its “so customers don’t have to think at all about restocking” line. Consumption that entirely removes the consumer’s decision making process from the transactional loop is quite the capitalist wet dream. Though the company does need to be careful about consumer protection rules as it seeks to excise friction from the buying process.

The ecommerce behemoth also claims customers are “increasingly” using its Alexa voice assistant to reorder staples, such as via the Alexa Shopping voice shopping app (Amazon calls it ‘hands free shopping’), which lets people tell the machine about a purchase intent; it will then suggest items to buy based on their Amazon order history.

Albeit, it offers no actual usage metrics for Alexa Shopping. So that’s meaningless PR.

A less flashy but perhaps more popular option than ‘hands free shopping’, which Amazon also says has contributed to making physical Dash buttons redundant, is its Subscribe & Save program.

This “lets customers automatically receive their favourite items every month”, as Amazon puts it. It offers an added incentive of discounts that kick in if the user signs up to buy five or more products per month. But the mainstay of the sales pitch is convenience with Amazon touting time saved by subscribing to ‘essentials’ — and time saved from compiling boring shopping lists once again means more time to consume the stuff being bought on Amazon…

In a statement about retiring physical Dash buttons from global sale on February 28, Amazon also confirmed it will continue to support existing Dash owners — presumably until their buttons wear down to the bare circuit board from repeat use.

“Existing Dash Button customers can continue to use their Dash Button devices,” it writes. “We look forward to continuing support for our customers’ shopping needs, including growing our Dash Replenishment product line-up and expanding availability of virtual Dash Buttons.”

So farewell then, clunky Dash buttons. Another physical push-button bites the dust. Though the plastic-y Dash buttons were quite unlike the classic iPhone home button — always seeming temporary and experimental rather than slick and coolly reassuring. Even so, the end of both buttons points to the need for tech businesses to tool up for the next wave of contextually savvy connected devices. More smarts, and more controllable smarts, is key.

Amazon’s statement about ‘shifting focus’ for Dash does not mention potential legal risks around the buttons related to consumer rights challenges — but that’s another angle here.

In January a court in Germany ruled Dash buttons breached local ecommerce rules, following a challenge by a regional consumer watchdog that raised concerns about T&Cs which allow Amazon to substitute a product of a higher price or even a different product entirely than what the consumer had originally selected. The watchdog argued consumers should be provided with more information about price and product before taking the order — and the judges agreed. Though Amazon said it would seek to appeal.

While it’s not clear whether or not that legal challenge contributed to Amazon’s decision to shutter Dash, it’s clear that virtual Dash buttons offer more opportunities for displaying additional information prior to a purchase than a screen-less physical Dash button. So are more easily adaptable to any tightening legal requirements across different markets.

The demise of the physical Dash was reported earlier by CNET.


Source: The Tech Crunch


Polis, the door-to-door marketer, raises another $2.5 million

Posted by on Feb 26, 2019 in Alexis Ohanian, api, boston, Business, digital advertising, distribution, garry tan, initialized capital, Marketing, NRG Energy, polis, Recent Funding, sales, Semil Shah, Startups, targeted advertising, TC, texas | 0 comments

Polis founder Kendall Tucker began her professional life as a campaign organizer in local Democratic politics, but — seeing an opportunity in her one-on-one conversations with everyday folks — has built a business taking that shoe leather approach to political campaigns to the business world.

Now the company she founded three years ago to test her thesis that Americans would welcome the return of the door-to-door salesperson is $2.5 million richer thanks to a new round of financing from Initialized Capital (the fund founded by Garry Tan and Reddit co-founder Alexis Ohanian) and Semil Shah’s Haystack.vc.

The Boston-based company currently straddles the line between political organizing tool and new marketing platform — a situation that even its founder admits is tenuous at the moment.

That tension is only exacerbated by the fact that the company is coming off one of its biggest political campaign seasons. Helping to power the get-out-the-vote initiative for Senatorial candidate Beto O’Rourke in Texas, Polis’ software managed the campaign’s outreach effort to 3 million voters across the state.

However, politically focused software and services businesses are risky. Earlier this year the Sean Parker-backed Brigade shut down and there are rumblings that other startups targeting political action may follow suit.

“Essentially, we got really excited about going into the corporate space because online has gotten so nasty,” says Tucker. “And, at the end of the day, digital advertising isn’t as effective as it once was.”

Customer acquisition costs in the digital ad space are rising. For companies like NRG Energy and Inspire Energy (both Polis clients), the cost of acquisitions online can be as much as $300 per person.

Polis helps identify which doors salespeople should target and works with companies to identify the scripts that are most persuasive for consumers, according to Tucker. The company also monitors for sales success and helps manage the process so customers aren’t getting too many house calls from persistent salespeople.

“We do everything through the conversation at the door,” says Tucker. “We do targeting and we do script curation (everything from what script do you use and when do you branch out of scripts) and we have an open API so they can push that out and they run with it through the rest of their marketing.”


Source: The Tech Crunch


Fabula AI is using social spread to spot ‘fake news’

Posted by on Feb 6, 2019 in Amazon, api, Artificial Intelligence, deep learning, Emerging-Technologies, Europe, European Research Council, Facebook, fake news, Imperial College London, London, machine learning, Mark Zuckerberg, Media, MIT, Myanmar, Social, Social Media, social media platforms, social media regulation, social network, social networks, Startups, TC, United Kingdom | 0 comments

UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”, where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach, which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned-upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation — which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogeneous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”
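The disease-spread analogy maps naturally onto a graph traversal: a story "infects" the followers of whoever shares it, hop by hop. A toy sketch (illustrating the analogy, not Fabula's method) that computes how many hops each user sits from the seed of a share cascade:

```python
from collections import deque

def spread_steps(adjacency, seed):
    """Breadth-first 'infection' over a share network.

    Returns, for each user reached, the number of hops from the
    seeding user — a crude analogue of the epidemic-style cascades
    described above, from which features like cascade depth and
    breadth can be read off.
    """
    steps = {seed: 0}
    queue = deque([seed])
    while queue:
        user = queue.popleft()
        for neighbor in adjacency.get(user, []):
            if neighbor not in steps:
                steps[neighbor] = steps[user] + 1
                queue.append(neighbor)
    return steps

# Tiny follower network: a's shares reach b and c, who both reach d.
network = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(spread_steps(network, "a"))  # → {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```

Differences in the shape of such cascades (deep and narrow vs shallow and broad, say) are exactly the kind of propagation signal the quoted research suggests distinguishes fake from genuine stories.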

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having been tested internally on Twitter data sub-sets at this stage. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of this year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is that it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
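ROC AUC is equivalent to the probability that a randomly chosen positive example is ranked above a randomly chosen negative one, with ties counted as half. A minimal from-scratch implementation of the metric itself (illustrating the measurement, not Fabula's evaluation code):

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation.

    labels: 1 for positive (e.g. 'fake'), 0 for negative ('real').
    scores: the classifier's confidence that each item is positive.
    Returns the fraction of positive/negative pairs the classifier
    orders correctly, counting ties as half a win.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

A perfect ranker scores 1.0, random guessing scores 0.5 — which is why an aggregate figure like Fabula's 93 percent is read as ranking quality rather than a simple per-item accuracy.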

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset, Fabula relied on true/fake labels attached to news stories by third-party fact checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of “many months”, according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps digesting (ingesting?) new stories and social media content.

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much about the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model, while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users, uploading what could be trillions of pieces of content daily — even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather, he suggests it could be used in conjunction with other approaches, such as content analysis, and thus function as another string in a wider BS detector’s bow.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used, he says, it could even do away with the need for independent third-party fact-checking organizations altogether, because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); or to power a content credibility ranking system that can down-weight dubious stories or even block them entirely; or for intermediate content screening to flag potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact it did not have enough reviewers who were able to understand (the many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic propagation and user focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human brains which lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesise that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and propensity to spread disinformation? As there can be a link between poverty and education… What then? Aren’t your savvy algorithmic content downweights risking exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding and about another $500,000 in total of European Research Council grants plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competition awards.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t discuss any potential discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”
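Bronstein describes spread that saturates within hours and is driven by a handful of highly followed "bottleneck" nodes. The toy sketch below makes that concrete; it is illustrative only, since Fabula's actual features and model are not public, and the cascade data here is invented.

```python
from collections import defaultdict

def cascade_features(retweets, horizon_hours):
    """Toy features for a single story's retweet cascade.

    retweets: list of (source_user, retweeting_user, hours_after_seed) tuples.
    Returns the cascade size within the time horizon and the most-amplified
    user, illustrating how early spread and 'bottleneck' nodes dominate.
    """
    out_degree = defaultdict(int)
    size = 0
    for src, _dst, t in retweets:
        if t <= horizon_hours:
            size += 1
            out_degree[src] += 1
    top_spreader = max(out_degree, key=out_degree.get) if out_degree else None
    return size, top_spreader

# A hypothetical cascade: user 'a' (many followers) triggers most of the spread.
cascade = [("a", "b", 0.5), ("a", "c", 1.0), ("a", "d", 2.0),
           ("b", "e", 3.0), ("a", "f", 6.0)]
print(cascade_features(cascade, horizon_hours=4))  # spread seen in the first 4 hours
```

On this invented data, most of the cascade is already visible within the first few hours, which is the intuition behind Fabula's claim that a few hours of spread are enough to classify.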

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team has mostly used U.S. political news to train its initial classifier. So some cultural variation in how people spread and react to nonsense online at least remains a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”
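The polarization Bronstein observed can be quantified with a simple homophily measure: the fraction of retweet edges that cross community lines. A minimal sketch on an invented graph follows; Fabula's real methodology is not public, and the labels here are made up.

```python
def cross_community_rate(edges, community):
    """Fraction of retweet edges that cross community lines.

    edges: (retweeter, original_poster) pairs.
    community: mapping of user -> community label.
    A low value is the pattern Bronstein describes: users mostly
    amplify content from within their own community.
    """
    crossing = sum(1 for u, v in edges if community[u] != community[v])
    return crossing / len(edges)

# Hypothetical graph: two clusters with one crossing edge out of five.
edges = [("u1", "u2"), ("u2", "u1"), ("u3", "u4"),
         ("u4", "u3"), ("u1", "u3")]
community = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}
print(cross_community_rate(edges, community))  # 0.2
```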

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests such bubbles do exist — albeit not encasing every Internet user.

Bronstein says the next step for the startup is to scale its prototype to handle multiple requests so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access, maybe with some commercial partners, to test the API, but eventually we would like to make it useable by multiple people from different businesses,” he says. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”
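Fabula has not published its API, so any integration detail is speculative. Purely as an illustration of what a publisher-facing scoring request might look like, here is a hypothetical payload builder; the field names and structure are entirely invented for this sketch.

```python
import json

def build_score_request(content_id, url, seed_cascade):
    """Build a hypothetical request body for a content-credibility API.

    seed_cascade: (source_user, retweeting_user, hours_after_seed) tuples
    covering the early spread window (Fabula cites two to 20 hours).
    All field names below are invented, not Fabula's actual schema.
    """
    return json.dumps({
        "content_id": content_id,
        "url": url,
        "cascade": [
            {"user": u, "retweeted_from": src, "hours": t}
            for src, u, t in seed_cascade
        ],
    })

payload = build_score_request("story-123", "https://example.com/article",
                              [("a", "b", 0.5), ("a", "c", 1.0)])
print(json.loads(payload)["content_id"])  # story-123
```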


Source: The Tech Crunch


Coinbase acquihires San Francisco startup Blockspring

Posted by on Jan 17, 2019 in Andreessen Horowitz, AOL, api, author, ceo, coinbase, CrunchBase, cryptocurrencies, cryptocurrency, disclosure, funding, Fundings & Exits, Information technology, Keystone Capital, San Francisco, TC, TechCrunch, Technology, world wide web, Y Combinator | 0 comments

Coinbase is continuing its push to suck up talent after the $8 billion-valued crypto business snapped up Blockspring, a San Francisco-based startup that enables developers to collect and process data from APIs.

The undisclosed deal was announced by Blockspring on its blog, and confirmed to TechCrunch by a Coinbase representative. Coinbase declined to comment further.

Blockspring started out as a serverless data business, but it pivoted into a service that lets companies use API data for purposes such as building lists and repositories for recruitment, marketing, sales, reporting and more. Pricing starts from $29 per month and Blockspring claims to work with “thousands” of companies.

The startup graduated from Y Combinator and, according to Crunchbase, had raised $3.5 million from investors that include SV Angel and A16z, both of which are Coinbase investors. Those common investors are likely a key reason for the deal, which appears to be a talent acquisition. The Blockspring team will join Coinbase, but it will continue to offer its existing products “for current and new customers as they always have.”

“Joining Coinbase was a no-brainer for a number of reasons including its commitment to establishing an open financial system and the strength of its engineering team, led by Tim Wagner (formerly of AWS Lambda). Making the technical simple and accessible is what we’ve always been about at Blockspring. And now we’ll get to push these goals forward along with the talented folks at Coinbase to make something greater than we could on our own,” wrote CEO Paul Katsen.

Coinbase raised $300 million last October to take it to $525 million raised to date from investors. While it may not be a huge one, the Blockspring deal looks to be its eleventh acquisition, according to data from Crunchbase. Most of those have been talent grabs, but its more substantial pieces of M&A have included the $120 million-plus deal for Earn.com, which installed Balaji Srinivasan as the company’s first CTO, the acquisition of highly-rated blockchain browser Cipher, and the purchase of securities dealer Keystone Capital, which boosted its move into security tokens.

In addition to buying up companies, Coinbase also makes investments via its early-stage focused Coinbase Ventures fund.

Disclosure: The author owns a small amount of cryptocurrency. Enough to gain an understanding, not enough to change a life.


Source: The Tech Crunch


Opera brings a flurry of crypto features to its Android mobile browser

Posted by on Dec 13, 2018 in android, api, Apps, author, Bitcoin, blockchain, blockchains, coinbase, computing, cryptocurrencies, cryptocurrency, cryptokitties, decentralization, ethereum, joseph lubin, note, Software, Technology | 0 comments

Crypto markets may be down down down, but that isn’t stopping Opera’s crypto features — first released in beta in July — from rolling out to all users of its core mobile browser today as the company bids to capture the ‘decentralized internet’ flag early on.

Opera — the world’s fifth most-used browser, according to Statcounter — released the new Opera Browser for Android that includes a built-in crypto wallet for receiving and sending Bitcoin and other tokens, while it also allows for crypto-based commerce where supported. So on e-commerce sites that accept payment via Coinbase Commerce, or other payment providers, Opera users can buy using a password or even their fingerprint.

Those are the headline features that’ll get the most use in the here and now, but Opera is also talking up its support for “Web 3.0” — the so-called decentralized internet of the future based on blockchain technology.

For that, Opera has integrated the Ethereum web3 API which will allow users of the browser to access decentralized apps (dapps) based on Ethereum. There’s also token support for Cryptokitties, the once-hot collectible game that seemingly every single decentralized internet product works with in one way or another.

But, to be quite honest, there really isn’t much to see or use on Web 3.0 right now; the big bet is that there will be in the future.

Ethereum, like other cryptocurrencies, is in a funk right now thanks to the bearish crypto market, but the popular refrain from developers is that the low season is a good time to build. Well, Opera has just shipped the means to access Ethereum dapps; will the community respond and give people a reason to care?

Pessimism aside, this launch is notable because it has the potential to get blockchain-based tech into the daily habits of “millions” of people, Charles Hamel — Opera’s product lead for crypto — told TechCrunch over email.

While Opera can’t match the user base of Apple’s Safari or Google Chrome — both of which have the advantage of bundling a browser with a mobile OS — Opera does have a very loyal following, which makes this release one of the most impactful blockchain launches to date.

Note: The author owns a small amount of cryptocurrency. Enough to gain an understanding, not enough to change a life.


Source: The Tech Crunch


Putting the band back together, ExactTarget execs reunite to launch MetaCX

Posted by on Dec 6, 2018 in alpha, api, business software, chief technology officer, cloud applications, cloud computing, computing, customer relationship management, exacttarget, indianapolis, Kobie Fuller, Los Angeles, Marketing, pilot, president, Salesforce Marketing Cloud, salesforce.com, scott dorsey, software as a service, TC, upfront ventures | 0 comments

Scott McCorkle has spent most of his professional career thinking about business to business software and how to improve it for a company’s customers.

The former President of ExactTarget and later chief executive of Salesforce Marketing Cloud has made billions of dollars building products to help support customer service and now he’s back at it again with his latest venture MetaCX.

Alongside Jake Miller, the former chief engineering lead at Salesforce Marketing Cloud and chief technology officer at ExactTarget, and David Duke, the chief customer officer and another ExactTarget alumnus, McCorkle has raised $14 million to build a white-labeled service that offers a toolkit for monitoring, managing and supporting customers as they use new software tools.

Are customers doing the things I want them to be doing through my product? What is it that they want to achieve, and why did they buy my product?

“MetaCX sits above any digital product,” McCorkle says. And its software monitors and manages the full spectrum of the customer relationship with that product. “It is API embeddable and we have a full user experience layer.”

For the company’s customers, MetaCX provides a dashboard that includes “outcomes, the collaboration, metrics tracked as part of the relationship and all the metrics around that are part of that engagement layer,” says McCorkle.

The first offerings will be launching in the beginning of 2019, but the company has dozens of customers already using its pilot, McCorkle said.

The Indianapolis-based company is one of the latest spinouts from High Alpha Studio, an accelerator and venture capital studio formed by Scott Dorsey, the former chief executive officer of ExactTarget. As one of a crop of venture investment firms and studios cropping up in the Midwest, High Alpha is something of a bellwether for the viability of the venture model in emerging ecosystems. And, in that respect, the success of the MetaCX round speaks volumes. Especially since the round was led by the Los Angeles-based venture firm Upfront Ventures.

“Our founding team includes world-class engineers, designers and architects who have been building billion-dollar SaaS products for two decades,” said McCorkle, in a statement. “We understand that enterprises often struggle to achieve the business outcomes they expect from SaaS, and the renewal process for SaaS suppliers is often an ambiguous guessing game. Our industry is shifting from a subscription economy to a performance economy, where suppliers and buyers of digital products need to transparently collaborate to achieve outcomes.”

As a result of the investment, Upfront partner Kobie Fuller will be taking a seat on the MetaCX board of directors alongside McCorkle and Dorsey.

“The MetaCX team is building a truly disruptive platform that will inject data-driven transparency, commitment and accountability against promised outcomes between SaaS buyers and vendors,” said Fuller, in a statement. “Having been on the journey with much of this team while shaping the martech industry with ExactTarget, I’m incredibly excited to partner again in building another category-defining business with Scott and his team in Indianapolis.”

 


Source: The Tech Crunch


Facebook is still facing ‘intermittent’ outages for advertisers ahead of Black Friday and Cyber Monday

Posted by on Nov 21, 2018 in ad network, adwords, api, digital marketing, Facebook, Facebook ad network, Marketing, Online Advertising, spokesperson, TC, world wide web | 0 comments

One day after experiencing a massive outage across its ad network, Facebook, one of the most important online advertising platforms, is still seeing “intermittent” issues for its ad products at one of the most critical times of the year for advertisers.

According to a spokesperson for the company, while most systems are restored there are still intermittent issues that could affect advertisers.

For most of the day yesterday, advertisers were unable to create and edit campaigns through Ads Manager or the Ads API tools.

The company said that existing ads were delivered, but advertisers could not set up new campaigns or make any changes to existing campaigns, according to several users of the network.

Reporting has been restored for all interfaces, according to the company, but conversion data may be delayed throughout the day for the Americas and in the evening for other regions.

The company declined to comment on how many campaigns were affected by the outage or on whether it intends to compensate or make up for the outage with advertisers on the platform.

Some advertisers are still experiencing outages and are not happy about it.

This is a bad look for a company that is already fighting fires on any number of other fronts. But unlike the problems with bullying, hate speech and disinformation, which don’t directly impact Facebook’s revenue, selling ads is how Facebook actually makes money.

In the busiest shopping season of the year (and therefore one of the busiest advertising seasons of the year), for Facebook to have no clear response, and for some developers to still be facing intermittent outages on the platform, is a bad sign.


Source: The Tech Crunch
