
The blog of DataDiggers


White House refuses to endorse the ‘Christchurch Call’ to block extremist content online

Posted by on May 15, 2019 in Australia, California, Canada, censorship, Facebook, France, freedom of speech, Google, hate crime, hate speech, New Zealand, Social Media, Software, TC, Terrorism, Twitter, United Kingdom, United States, White House, world wide web | 0 comments

The United States will not join other nations in endorsing the “Christchurch Call” — a global statement that commits governments and private companies to actions that would curb the distribution of violent and extremist content online.

“While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the Call. We will continue to engage governments, industry, and civil society to counter terrorist content on the Internet,” the statement from the White House reads.

The “Christchurch Call” is a non-binding statement drafted by foreign ministers from New Zealand and France meant to push internet platforms to take stronger measures against the distribution of violent and extremist content. The initiative originated as an attempt to respond to the March killings of 51 Muslim worshippers in Christchurch and the subsequent spread of the video recording of the massacre and statements from the killer online.

By signing the pledge, companies agree to improve their moderation processes and share more information about the work they’re doing to prevent terrorist content from going viral. Meanwhile, government signatories are agreeing to provide more guidance through legislation that would ban toxic content from social networks.

Already, Twitter, Microsoft, Facebook and Alphabet — the parent company of Google — have signed on to the pledge, along with the governments of France, Australia, Canada and the United Kingdom.

The “Christchurch Call” is consistent with other steps that government agencies are taking to address how to manage the ways in which technology is tearing at the social fabric. Members of the Group of 7 are also meeting today to discuss broader regulatory measures designed to combat toxic content, protect privacy and ensure better oversight of technology companies.

For its part, the White House seems more concerned about the potential risks to free speech that could stem from any actions taken to staunch the flow of extremist and violent content on technology platforms.

“We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press,” the statement reads. “Further, we maintain that the best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging.”

Signatories are already taking steps to make it harder for graphic violence or hate speech to proliferate on their platforms.

Last night, Facebook introduced a one-strike policy that would ban users who violate its live-streaming policies after one infraction.

The Christchurch killings are only the latest example of how white supremacist hate groups and terrorist organizations have used online propaganda to create an epidemic of violence at a global scale. Indeed, the alleged shooter in last month’s attack on a synagogue in Poway, Calif., referenced the writings of the Christchurch killer in an explanation for his attack, which he published online.

Critics are already taking shots at the White House for its inability to add the U.S. to a group of nations making a non-binding commitment to ensure that the global community can #BeBest online.


Source: The Tech Crunch


Microsoft open-sources a crucial algorithm behind its Bing Search services

Posted by on May 15, 2019 in Artificial Intelligence, Bing, Cloud, computing, Developer, Microsoft, open source software, search results, Software, windows phone, world wide web | 0 comments

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return search results to its users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where people search through vast data troves, including retail. In this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team uses a pre-trained model to encode data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
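The core idea behind this pipeline, nearest-neighbor search over encoded vectors, can be sketched in a few lines. SPTAG itself avoids an exhaustive scan by partitioning the space with trees and a neighborhood graph; the brute-force cosine scan below is a toy illustration of the concept only and does not use SPTAG’s actual API.

```python
import numpy as np

def build_index(vectors):
    # Normalize once so that a plain dot product equals cosine similarity.
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / norms

def search(index, query, k=3):
    # Encode the query the same way, then rank every item by similarity.
    q = query / np.linalg.norm(query)
    scores = index @ q
    top = np.argsort(scores)[::-1][:k]
    return list(zip(top.tolist(), scores[top].tolist()))

# Toy corpus: each row stands in for an encoded word, pixel or snippet.
rng = np.random.default_rng(42)
corpus = rng.normal(size=(1000, 64)).astype(np.float32)
index = build_index(corpus)

# A slightly perturbed copy of item 7 should come back as the best match.
query = corpus[7] + 0.01 * rng.normal(size=64).astype(np.float32)
results = search(index, query, k=3)
print(results[0][0])
```

At Bing’s scale (150 billion vectors, per the article), this linear scan is exactly what SPTAG’s tree-and-graph index exists to avoid.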

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.


Source: The Tech Crunch


Singapore’s Grain, a profitable food delivery startup, pulls in $10M for expansion

Posted by on May 10, 2019 in Asia, bangkok, Cento Ventures, ceo, Deliveroo, Food, food delivery, Foodpanda, funding, Fundings & Exits, grain, Honestbee, Impossible foods, munchery, online food ordering, openspace ventures, Singapore, Southeast Asia, Spotify, Startup company, TC, Thailand, transport, Travis Kalanick, Uber, United States, websites, world wide web | 0 comments

Cloud kitchens are the big thing in food delivery, with ex-Uber CEO Travis Kalanick’s new business one contender in the space and Asia, particularly Southeast Asia, a major focus. Despite the newcomers, a more established startup from Singapore has raised a large bowl of cash to go after regional expansion.

Founded in 2014, Grain specializes in clean food and takes a different approach from Kalanick’s CloudKitchens or food delivery services like Deliveroo, FoodPanda or GrabFood.

It adopted a cloud kitchen model — utilizing unwanted real estate as kitchens, with delivery services for output — but used it for its own operations. So while CloudKitchens and others rent their space to F&B companies as a cheaper way to make food for their on-demand delivery customers, Grain works with its own chefs, menu and delivery team. A so-called ‘full-stack’ model, if you can stand the clichéd tech phrase.

Finally, Grain is also profitable. The new round has it shooting for growth — more on that below — but the startup was profitable last year, CEO and co-founder Yi Sung Yong told TechCrunch.

Now it is reaping the rewards of a model that keeps it in control of its product, unlike others that are complicated by a chain that includes the restaurant and a delivery person.

We previously wrote about Grain when it raised a $1.7 million Series A back in 2016, and today it announced a $10 million Series B led by Thailand’s Singha Ventures, the VC arm of the beer brand. A bevy of other investors took part, including Genesis Alternative Ventures, Sass Corp, K2 Global — run by serial investor Ozi Amanat, who has backed Impossible Foods, Spotify and Uber among others — FoodXervices and Majuven. Existing investors Openspace Ventures, Raging Bull — from Thai Express founder Ivan Lee — and Cento Ventures participated.

The round includes venture debt, as well as equity, and it is worth noting that the family office of the owners of The Coffee Bean & Tea Leaf — Sassoon Investment Corporation — was involved.

Grain covers individual food as well as buffets in Singapore

Three years is a long gap between the two deals — Openspace and Cento have even rebranded during the intervening period — and the ride has been an eventful one. During those years, Sung said, the business came close to running out of capital and doubled down on the fundamentals before its precarious runway ran out.

In fact, he said, the company — which now has over 100 staff — was fully prepared to self-sustain.

“We didn’t think of raising a Series B,” he explained in an interview. “Instead, we focused on the business and getting profitable… we thought that we can’t depend entirely on investors.”

And, ladies and gentlemen, the irony is that VCs very much like a business that can self-sustain — it shows a model is proven — and investing in a startup that doesn’t need capital can be attractive.

Ultimately, though, profitability is seen as sexy today — particularly in the meal space, where countless U.S. startups have shuttered, including Munchery and Sprig — but the focus meant that Grain had to shelve its expansion plans. It then went through soul-searching times in 2017, when a spoilt curry saw 20 customers get food poisoning.

Sung declined to comment directly on that incident, but he said the company today has developed the “infrastructure” to scale its business across the board, and that very much includes quality control.

Grain co-founder and CEO Yi Sung Yong [Image via LinkedIn]

Grain currently delivers “thousands” of meals per day in Singapore, its sole market, with eight figures in sales per year, he said. Last year, growth was 200 percent, Sung continued, and now is the time to look overseas. With Singha, the Grain CEO said the company has “everything we need to launch in Bangkok.”

Thailand — which Malaysia-based rival Dahamakan picked for its first expansion — is the only new launch on the table, but Sung said that could change.

“If things move faster, we’ll expand to more cities, maybe one per year,” he said. “But we need to get our brand, our food and our service right first.”

One part of that may be securing better deals for raw ingredients and food from suppliers. Grain is expanding its ‘hub’ kitchens — outposts placed strategically around town to serve customers faster — and growing its fleet of trucks, which are retrofitted with warmers and chillers for deliveries to customers.

Grain’s journey is proof that startups in the region will go through trials and tribulations, but being able to bolt down the fundamentals and reduce burn rate is crucial in the event that things go awry. Just look to grocery startup Honestbee, also based in Singapore, for evidence of what happens when costs are allowed to pile up.


Source: The Tech Crunch


Google starts rolling out better AMP URLs

Posted by on Apr 17, 2019 in Amp+, chrome, digital media, Google, google search, HTML, Mobile, mobile web, Online Advertising, TC, world wide web | 0 comments

Publishers don’t always love Google’s AMP pages, but readers surely appreciate their speed, and while publishers are loath to give Google more power, virtually every major site now supports the format. One AMP quirk that publishers definitely never liked is about to go away, though. Starting today, when you use Google Search and click on an AMP link, the browser will display the publisher’s real URL instead of an “https://google.com/amp” link.
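To make the change concrete: the Google AMP cache has commonly served pages under URLs of the form `google.com/amp/s/<host>/<path>`, and recovering the publisher’s address is essentially a rewrite of that path. The sketch below is a simplified assumption about that format, not Google’s actual implementation, which now surfaces the real URL without any client-side rewriting.

```python
from urllib.parse import urlparse

def publisher_url(amp_url):
    """Recover the publisher URL from a google.com/amp cache link.

    Assumes the common cache layout: /amp/s/<host>/<path> for HTTPS
    origins and /amp/<host>/<path> for HTTP ones.
    """
    path = urlparse(amp_url).path
    if not path.startswith("/amp/"):
        return amp_url  # not an AMP cache link; leave it alone
    rest = path[len("/amp/"):]
    if rest.startswith("s/"):
        scheme, rest = "https", rest[len("s/"):]
    else:
        scheme = "http"
    return f"{scheme}://{rest}"

print(publisher_url("https://www.google.com/amp/s/example.com/story.amp"))
# → https://example.com/story.amp
```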

This move has been in the making for well over a year. Last January, the company announced that it was embarking on a multi-month effort to load AMP pages from the Google AMP cache without displaying the Google URL.

At the core of this effort was the new Web Packaging standard, which uses signed exchanges with digital signatures to let the browser trust a document as if it belongs to the publisher’s origin. By default, a browser should reject scripts in a web page that try to access data that doesn’t come from the same origin. Publishers will have to do a bit of extra work and publish both signed and unsigned versions of their stories.


Quite a few publishers already do this, given that Google started alerting publishers of this change in November 2018. For now, only Chrome supports the core features behind this service, but other browsers will likely add support soon, too.

For publishers, this is a pretty big deal, given that their domain name is a core part of their brand identity. Using their own URL also makes it easier to get analytics, and the standard grey bar that sits on top of AMP pages to show which site you are on isn’t necessary anymore, because the name will be in the URL bar.

To launch this new feature, Google also partnered with Cloudflare, which launched its AMP Real URL feature today. It’ll take a bit before it rolls out to all users, who can then enable it with a single click. Once enabled, Cloudflare will automatically sign every AMP page it sends to the Google AMP cache. For the time being, that makes Cloudflare the only CDN that supports this feature, though others will surely follow.

“AMP has been a great solution to improve the performance of the internet and we were eager to work with the AMP Project to help eliminate one of AMP’s biggest issues — that it wasn’t served from a publisher’s perspective,” said Matthew Prince, co-founder and CEO of Cloudflare. “As the only provider currently enabling this new solution, our global scale will allow publishers everywhere to benefit from a faster and more brand-aware mobile experience for their content.”



Source: The Tech Crunch


Get ready for a new era of personalized entertainment

Posted by on Apr 13, 2019 in Amazon, Artificial Intelligence, Column, computing, Content, Facebook, machine learning, Marketing, Multimedia, personalization, smart devices, Spotify, Streaming Media, streaming services, Twitter, virtual reality, world wide web | 0 comments

New machine learning technologies, user interfaces and automated content creation techniques are going to expand the personalization of storytelling beyond algorithmically generated news feeds and content recommendation.

The next wave will be software-generated narratives that are tailored to the tastes and sentiments of a consumer.

Concretely, it means that your digital footprint, personal preferences and context unlock alternative features in the content itself, be it a news article, live video or a hit series on your streaming service.

The same title contains different experiences for different people.

From smart recommendations to smarter content

When you use YouTube, Facebook, Google, Amazon, Twitter, Netflix or Spotify, algorithms select what gets recommended to you. The current mainstream services and their user interfaces and recommendation engines have been optimized to serve you content you might be interested in.

Your data, other people’s data, content-related data and machine learning methods are used to match people and content, thus improving the relevance of content recommendations and efficiency of content distribution.

However, so far the content experience itself has been mostly the same for everyone. If the same news article, live video or TV series episode gets recommended to you and me, we both read and watch the same thing, experiencing the same content.

That’s about to change. Soon we’ll be seeing new forms of smart content, in which user interface, machine learning technologies and content itself are combined in a seamless manner to create a personalized content experience.

What is smart content?

Smart content means that the content experience itself is affected by who is seeing, watching, reading or listening to the content. The content itself changes based on who you are.

We are already seeing the first forerunners in this space. TikTok’s whole content experience is driven by very short videos, audiovisual content sequences if you will, ordered and woven together by algorithms. Every user sees a different, personalized, “whole” based on her viewing history and user profile.

At the same time, Netflix has recently started testing new forms of interactive content (TV series episodes, e.g. Black Mirror: Bandersnatch) in which the user’s own choices directly affect the content experience, including dialogue and storyline. And more is on its way. With the Love, Death & Robots series, Netflix is experimenting with episode order within a series, serving the episodes in a different order for different users.

Some earlier predecessors of interactive audio-visual content include sports event streaming, in which the user can decide which particular stream she follows and how she interacts with the live content, for example rewinding the stream and spotting the key moments based on her own interest.

Simultaneously, we’re seeing how machine learning technologies can be used to create photo-like images of imaginary people, creatures and places. Current systems can recreate and alter entire videos, for example by changing the style, scenery, lighting, environment or central character’s face. Additionally, AI solutions are able to generate music in different genres.

Now imagine that TikTok’s individual short videos were automatically personalized with effects chosen by an AI system, so that the whole video was customized for you. Or that the choices in Netflix’s interactive content affecting the plot twists, dialogue and even soundtrack were made automatically by algorithms based on your profile.

Personalized smart content is coming to news as well. Automated systems, using today’s state-of-the-art NLP technologies, can generate long pieces of concise, comprehensible and even inventive textual content at scale. At present, media houses use automated content creation systems, or “robot journalists”, to create news material varying from complete articles to audio-visual clips and visualizations. Through content atomization (breaking content into small modular chunks of information) and machine learning, content production can be increased massively to support smart content creation.

Say that a news article you read or listen to is about a specific political topic that is unfamiliar to you. Your version of the story might use different concepts and offer a different angle than the version served to a friend who’s really deep into politics. A beginner’s smart content news experience would differ from the experience of a topic enthusiast.

Content itself will become a software-like fluid and personalized experience, where your digital footprint and preferences affect not just how the content is recommended and served to you, but what the content actually contains.

Automated storytelling?

How is it possible to create smart content that contains different experiences for different people?

Content needs to be thought of and treated as an iterative, configurable process rather than a ready-made static whole that is finished once it has been published to the distribution pipeline.

Importantly, the core building blocks of the content experience change: smart content consists of atomized modular elements that can be modified, updated, remixed, replaced, omitted and activated based on varying rules. In addition, content modules that have been made in the past can be reused if applicable. Content is designed and developed more like software.
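A minimal sketch of this rule-activated module idea might look like the following. Every name, field and rule here is invented for illustration; real systems would use far richer profiles and metadata.

```python
# A story as a pool of modules, each guarded by a rule over the
# reader's profile; assembly keeps the modules whose rule accepts
# this particular reader, in order.

def assemble(modules, profile):
    return [m["text"] for m in modules if m["rule"](profile)]

story = [
    {"text": "Parliament passed the budget bill today.",
     "rule": lambda p: True},                          # shown to everyone
    {"text": "(A budget bill sets the state's yearly spending plan.)",
     "rule": lambda p: p["expertise"] == "beginner"},  # explainer module
    {"text": "Clause 12 revives the carry-forward mechanism from 2014.",
     "rule": lambda p: p["expertise"] == "expert"},    # deep-dive module
]

beginner = assemble(story, {"expertise": "beginner"})
expert = assemble(story, {"expertise": "expert"})
print(len(beginner), len(expert))  # each reader gets two modules
```

Both readers receive a coherent two-module story, but the second module differs: the beginner gets the explainer, the enthusiast gets the deep dive.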

Currently a significant amount of human effort and computing resources are used to prepare content for machine-powered content distribution and recommendation systems, varying from smart news apps to on-demand streaming services. With smart content, the content creation and its preparation for publication and distribution channels wouldn’t be separate processes. Instead, metadata and other invisible features that describe and define the content are an integral part of the content creation process from the very beginning.

Turning Donald Glover into Jay Gatsby

With smart content, the narrative or image itself becomes an integral part of an iterative feedback loop, in which the user’s actions, emotions and other signals as well as the visible and invisible features of the content itself affect the whole content consumption cycle from the content creation and recommendation to the content experience. With smart content features, a news article or a movie activates different elements of the content for different people.

It’s very likely that smart content for entertainment purposes will have different features and functions than news media content. Moreover, people expect a frictionless and effortless content experience, and thus smart content differs from games: it doesn’t necessarily require direct actions from the user. If the user wants, personalization happens proactively and automatically, without explicit interaction.

Creating smart content requires both human curation and machine intelligence. Humans focus on things that require creativity and deep analysis while AI systems generate, assemble and iterate the content that becomes dynamic and adaptive just like software.

Sustainable smart content

Smart content has different configurations and representations for different users, user interfaces, devices, languages and environments. The same piece of content contains elements that can be accessed through voice user interface or presented in augmented reality applications. Or the whole content expands into a fully immersive virtual reality experience.

In the same way as with personalized user interfaces and smart devices, smart content can be used for good and bad. It can be used to enlighten and empower, as well as to trick and mislead. Thus it’s critical that a human-centered approach and sustainable values are built into the very core of smart content creation. Personalization needs to be transparent, and the user needs to be able to choose whether she wants the content to be personalized or not. And of course, not all content will be smart in the same way, if at all.

If used in a sustainable manner, smart content can break filter bubbles and echo chambers, as it can be used to make a wide variety of information more accessible for diverse audiences. Through personalization, challenging topics can be presented to people according to their abilities and preferences, regardless of their background or level of education. For example, a beginner’s version of vaccination content or a digital media literacy article might use gamification elements, while the more experienced user directly gets a thorough, fact-packed account of the recent developments and research results.

Smart content is also aligned with the efforts against today’s information operations, such as fake news and its variants like “deep fakes” (http://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes). If content is like software, legitimate software runs on your devices and interfaces without a problem, while machine-generated, realistic-looking but suspicious content, like a deep fake, can be detected and filtered out based on its signature and other machine-readable qualities.


Smart content is the ultimate combination of user experience design, AI technologies and storytelling.

News media should be among the first to start experimenting with smart content. When intelligent content starts eating the world, one should be creating one’s own intelligent content.

The first players that master smart content will be among tomorrow’s reigning digital giants. And that’s one of the main reasons why today’s tech titans are going seriously into the content game. Smart content is coming.


Source: The Tech Crunch


Vizion.ai launches its managed Elasticsearch service

Posted by on Mar 28, 2019 in Amazon Web Services, api, Artificial Intelligence, Caching, cloud computing, computing, Developer, Elastic, Elasticsearch, Enterprise, ML, TC, world wide web | 0 comments

Setting up Elasticsearch, the open-source system that many companies large and small use to power their distributed search and analytics engines, isn’t the hardest thing. What is very hard, though, is to provision the right amount of resources to run the service, especially when your users’ demand comes in spikes, without overpaying for unused capacity. Vizion.ai’s new Elasticsearch Service does away with all of this by essentially offering Elasticsearch as a service and only charging its customers for the infrastructure they use.

Vizion.ai’s service automatically scales up and down as needed. It’s a managed service and delivered as a SaaS platform that can support deployments on both private and public clouds, with full API compatibility with the standard Elastic stack that typically includes tools like Kibana for visualizing data, Beats for sending data to the service and Logstash for transforming the incoming data and setting up data pipelines. Users can easily create several stacks for testing and development, too, for example.

Vizion.ai GM and VP Geoff Tudor

“When you go into the AWS Elasticsearch service, you’re going to be looking at dozens or hundreds of permutations for trying to build your own cluster,” Vizion.ai’s VP and GM Geoff Tudor told me. “Which instance size? How many instances? Do I want geographical redundancy? What’s my networking? What’s my security? And if you choose wrong, then that’s going to impact the overall performance. […] We do balancing dynamically behind that infrastructure layer.” To do this, the service looks at the utilization patterns of a given user and then allocates resources to optimize for the specific use case.
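The kind of utilization-driven decision Tudor describes can be sketched as a simple feedback loop. The function, thresholds and node counts below are invented for illustration; Vizion.ai’s actual balancing logic is not public.

```python
# Toy autoscaling policy: look at recent utilization samples and decide
# how many nodes to run next, so spiky demand doesn't mean paying for
# idle capacity the rest of the time.

def plan_capacity(recent_utilization, current_nodes,
                  scale_up_at=0.75, scale_down_at=0.25, min_nodes=1):
    """Return the node count for the next interval.

    recent_utilization: fractions (0..1) observed over recent intervals.
    """
    avg = sum(recent_utilization) / len(recent_utilization)
    if avg > scale_up_at:
        return current_nodes + 1          # demand spike: add capacity
    if avg < scale_down_at and current_nodes > min_nodes:
        return current_nodes - 1          # mostly idle: shed a node
    return current_nodes                  # steady state: hold

print(plan_capacity([0.9, 0.8, 0.85], 3))  # → 4
print(plan_capacity([0.1, 0.2, 0.15], 3))  # → 2
```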

What Vizion.ai has done here is take some of the work from its parent company Panzura, a multi-cloud storage service for enterprises that has plenty of patents around data caching, and apply it to this new Elasticsearch service.

There are obviously other companies that offer commercial Elasticsearch platforms already. Tudor acknowledges this, but argues that his company’s platform is different. With other products, he argues, you have to decide on the size of your block storage for your metadata upfront, for example, and you typically want SSDs for better performance, which can quickly get expensive. Thanks to Panzura’s IP, Vizion.ai is able to bring down the cost by caching recent data on SSDs and keeping the rest in cheaper object storage pools.
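The hot/cold split described here, recent data on SSD, the rest in cheaper object storage, can be sketched as a recency-based tiering policy. The class, capacity and tier names below are invented for illustration and are not Vizion.ai’s actual design.

```python
# Toy tiering policy: the N most recently accessed keys live on SSD,
# everything else falls back to an object-storage pool.

class TieredStore:
    def __init__(self, ssd_capacity=2):
        self.ssd_capacity = ssd_capacity
        self._clock = 0
        self._last_access = {}  # key -> logical access time

    def touch(self, key):
        # Record an access; a logical clock keeps ordering deterministic.
        self._clock += 1
        self._last_access[key] = self._clock

    def tier_of(self, key):
        hot = sorted(self._last_access, key=self._last_access.get,
                     reverse=True)[: self.ssd_capacity]
        return "ssd" if key in hot else "object-storage"

store = TieredStore(ssd_capacity=2)
for key in ["a", "b", "c"]:
    store.touch(key)

print(store.tier_of("c"), store.tier_of("a"))  # ssd object-storage
```

With only two SSD slots, the oldest key ("a") is relegated to object storage while the two most recent stay on fast media, which is the cost trade-off the article describes.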

He also noted that the company is positioning the overall Vizion.ai service, with the Elasticsearch service as one of the earliest components, as a platform for running AI and ML workloads. Support for TensorFlow, PredictionIO (which plays nicely with Elasticsearch) and other tools is also in the works. “We want to make this an easy serverless ML/AI consumption in a multi-cloud fashion, where not only can you leverage the compute, but you can also have your storage of record at a very cost-effective price point.”


Source: The Tech Crunch


The “splinternet” is already here

Posted by on Mar 13, 2019 in alibaba, Asia, Baidu, belgium, Brussels, censorship, chief executive officer, China, Column, corbis, Dragonfly, Eric Schmidt, eu commission, Facebook, firewall, Getty-Images, Google, great firewall, Information technology, Internet, internet access, Iran, Mark Zuckerberg, net neutrality, North Korea, online freedom, open Internet, photographer, russia, Saudi Arabia, search engines, South Korea, Sundar Pichai, Syria, Tencent, United Kingdom, United Nations, United States, Washington D.C., world wide web | 0 comments

There is no question that the arrival of a fragmented and divided internet is now upon us. The “splinternet,” where cyberspace is controlled and regulated by different countries is no longer just a concept, but now a dangerous reality. With the future of the “World Wide Web” at stake, governments and advocates in support of a free and open internet have an obligation to stem the tide of authoritarian regimes isolating the web to control information and their populations.

Both China and Russia have been rapidly increasing their internet oversight, leading to increased digital authoritarianism. Earlier this month Russia announced a plan to disconnect the entire country from the internet to simulate an all-out cyberwar. And, last month China issued two new censorship rules, identifying 100 new categories of banned content and implementing mandatory reviews of all content posted on short video platforms.

While China and Russia may be two of the biggest internet disruptors, they are by no means the only ones. Cuban, Iranian and even Turkish politicians have begun pushing “information sovereignty,” a euphemism for replacing services provided by western internet companies with their own more limited but easier to control products. And a 2017 study found that numerous countries, including Saudi Arabia, Syria and Yemen have engaged in “substantial politically motivated filtering.”

This digital control has also spread beyond authoritarian regimes. Increasingly, there are more attempts to keep foreign nationals off certain web properties.

For example, digital content available to U.K. citizens via the BBC’s iPlayer is becoming increasingly unavailable to Germans. South Korea filters, censors and blocks news agencies belonging to North Korea. Never have so many governments, authoritarian and democratic, actively blocked internet access to their own nationals.

The consequences of the splinternet and digital authoritarianism stretch far beyond the populations of these individual countries.

Back in 2016, U.S. trade officials accused China’s Great Firewall of creating what foreign internet executives defined as a trade barrier. Through controlling the rules of the internet, the Chinese government has nurtured a trio of domestic internet giants, known as BAT (Baidu, Alibaba and Tencent), who are all in lock step with the government’s ultra-strict regime.

The super-apps that these internet giants produce, such as WeChat, are built for censorship. The result? According to former Google CEO Eric Schmidt, “the Chinese Firewall will lead to two distinct internets. The U.S. will dominate the western internet and China will dominate the internet for all of Asia.”

Surprisingly, U.S. companies are helping to facilitate this splinternet.

Google had spent decades attempting to break into the Chinese market but had difficulty coexisting with the Chinese government’s strict censorship and collection of data, so much so that in March 2010, Google chose to pull its search engines and other services out of China. However now, in 2019, Google has completely changed its tune.

Google has made censorship allowances through an entirely different Chinese internet platform called Project Dragonfly. Dragonfly is a censored version of Google’s Western search platform, with the key difference being that it blocks results for sensitive public queries.

Sundar Pichai, chief executive officer of Google, before a House Judiciary Committee hearing in Washington, D.C., on Dec. 11, 2018. (Photo: Andrew Harrer/Bloomberg via Getty Images)

The Universal Declaration of Human Rights states that “people have the right to seek, receive, and impart information and ideas through any media and regardless of frontiers.”

Drafted in 1948, this declaration reflects the sentiment felt following World War II, when people worked to prevent authoritarian propaganda and censorship from ever taking hold the way it once did. And, while these words were written over 70 years ago, well before the age of the internet, this declaration challenges the very concept of the splinternet and the undemocratic digital boundaries we see developing today.

As the web becomes more splintered and information more controlled across the globe, we risk the deterioration of democratic systems, the corruption of free markets and further cyber misinformation campaigns. We must act now to save a free and open internet from censorship and international maneuvering, before history repeats itself.

An Avaaz activist attends an anti-Facebook demonstration with cardboard cutouts of Facebook chief Mark Zuckerberg reading “Fix Fakebook,” in front of the Berlaymont, the EU Commission headquarters, in Brussels on May 22, 2018. (Photo: Thierry Monasse/Corbis via Getty Images)

The Ultimate Solution

Echoing the UDHR of 1948, the United Nations declared in 2016 that “online freedom” is a fundamental human right that must be protected. While the resolution is not legally binding, it passed by consensus, giving the UN limited power to endorse an open internet. By selectively applying pressure to non-compliant governments, the UN can now push digital human rights standards.

The first step would be to implement a transparent monitoring system ensuring that the full resources of the internet, and the ability to operate on it, are easily accessible to all citizens. Countries such as North Korea, China, Iran and Syria, which block websites and filter email and social media communication, would be encouraged to improve through incentives and consequences.

All countries would be ranked on multiple positive factors, including open standards, lack of censorship and low barriers to internet entry. A three-tier open internet ranking system would divide all nations into Free, Partly Free and Not Free. The ultimate goal would be for all countries to migrate gradually toward the Free category, giving every citizen full access to information across the web, free and open without constraints.

The second step would be for the UN to align itself much more closely with the largest Western internet companies. Together they could assemble detailed reports on each government’s censorship creep and overreach. The global tech companies are keenly aware of which countries are applying pressure for censorship and the restriction of digital speech. Allied, the UN and global tech firms would be formidable adversaries of the censors, protecting the citizens of the world. Every individual in every country deserves to know what is truly happening in the world.

Free countries, with an open internet and no undue regulation or censorship, would have a clear path to tremendous economic prosperity. Countries that remain in the Not Free tier, attempting to impose their self-serving political and social values, would find themselves isolated and in visible violation of digital human rights standards.

This is not a hollow threat. A completely closed off splinternet will inevitably lead a country to isolation, low growth rates, and stagnation.


Source: The Tech Crunch

Read More

Venture investors and startup execs say they don’t need Elizabeth Warren to defend them from big tech

Posted by on Mar 8, 2019 in Amazon, AT&T, ben narasin, chief technology officer, coinbase, Companies, economy, elizabeth warren, entrepreneurship, Facebook, Federal Trade Commission, Google, IBM, kara nortman, Los Angeles, Microsoft, new enterprise associates, Private Equity, Social Media, Startup company, TC, Technology, Technology Development, United States, upfront ventures, us government, venky ganesan, Venture Capital, Walmart, world wide web, zappos | 0 comments

Responding to Elizabeth Warren’s call to regulate and break up some of the nation’s largest technology companies, the venture capitalists that invest in technology companies are advising the presidential hopeful to move slowly and not break anything.

Warren’s plan called for regulators to be appointed to oversee the unwinding of several acquisitions that were critical to developing the core technologies that make Alphabet’s Google and the social media giant Facebook so profitable… and Zappos.

Warren also wanted regulation in place that would block companies making over $25 billion that operate as social media or search platforms or marketplaces from owning companies that also sell services on those marketplaces.

As a whole, venture capitalists viewing the policy were underwhelmed.

“As they say on Broadway, ‘you gotta have a gimmick,’ and this is clearly Warren’s,” says Ben Narasin, an investor at New Enterprise Associates, one of the nation’s largest investment firms, which has $18 billion in assets under management and has invested in consumer companies like Jet, an online and mobile retailer that competed with Amazon and was sold to Walmart for $3.3 billion.

“Decades ago, at the peak of Japanese growth as a technology competitor on the global stage, the US government sought to break up IBM . This is not a new model, and it makes no sense,” says Narasin. “We slow down our country, our economy and our ability to innovate when the government becomes excessively aggressive in efforts to break up technology companies, because they see them through a prior-decades lens, when they are operating in a future decade reality. This too shall pass.”

Balaji Srinivasan, the chief technology officer of Coinbase, took to Twitter to offer his thoughts on the Warren plan. “If big companies like Google, Facebook and Amazon are prevented from acquiring startups, that actually reduces competition,” Srinivasan writes.

“There are two separate issues here that are being conflated. One issue is do we need regulation on the full platform companies. And the answer is absolutely,” says Venky Ganesan, the managing director of Menlo Ventures. “These platforms have a huge impact on society at large and they have huge influence.”

But while the platforms need to be regulated, Ganesan says, Senator Warren’s approach is an exercise in overreach.

“That plan is like taking a bazooka to a knife fight. It’s overwhelming and it’s not commensurate with the issues,” Ganesan says. “I don’t think at the end of the day venture capital is worrying about competition from these big platform companies. [And] as the proposal is composed it would create more obstacles rather than less.”

Warren’s own example of the antitrust cases brought against companies like AT&T and Microsoft is a good model for how to proceed, Ganesan says. “We want to have the technocrats at the FTC figure out the right way to bring balance.”

Kara Nortman, a partner with the Los Angeles-based firm Upfront Ventures, is also concerned about the potential unforeseen consequences of Warren’s proposals.

“The specifics of the policy as presented strike me as having potentially negative consequences for innovation. These companies are funding massive innovation initiatives in our country. They’re creating jobs and taking risks in areas of technology development where we could potentially fall behind other countries and wind up reducing our quality of life,” Nortman says. “We’re not seeing that innovation or initiative come from the government – or that support for encouraging immigration and, by extension, embracing the talented foreign entrepreneurs who could develop new technologies and businesses.”

Nortman sees the Warren announcement as an attempt to start a dialogue between government regulators and big technology companies.

“My hope is that this is the beginning of a dialogue that is constructive,” Nortman says. “And since Elizabeth Warren is a thoughtful policymaker this is likely the first salvo toward an engagement with the technology community to work collaboratively on issues that we all want to see solved and that some of us are dedicating our career in venture to help solving.”


Source: The Tech Crunch

Read More

Facebook removes hundreds of accounts linked to fake news group in Indonesia

Posted by on Feb 1, 2019 in Asia, computing, digital media, Facebook, fake news, Indonesia, instagram, Myanmar, Philippines, photo sharing, Singapore, Social Media, social network, Software, Southeast Asia, sri lanka, TC, Thailand, United Nations, United States, world wide web | 0 comments

Facebook said today it has removed hundreds of Facebook and Instagram accounts with links to an organization that peddled fake news.

The world’s fourth most populous country, home to over 260 million people, Indonesia is in an election year alongside its Southeast Asian neighbors Thailand and the Philippines. Facebook said this week it has set up an ‘election integrity’ team in Singapore, its APAC headquarters, as it tries to prevent its social network from being misused in the lead-up to voting, as happened in the U.S.

This Indonesia bust is the first action announced since that task force was put in place: 207 Facebook Pages, 800 Facebook accounts, 546 Facebook Groups and 208 Instagram accounts were removed for “engaging in coordinated inauthentic behavior.”

“About 170,000 people followed at least one of these Facebook Pages, and more than 65,000 followed at least one of these Instagram accounts,” Facebook said of the reach of the removed accounts.

The groups and accounts are linked to Saracen Group, a digital media group that saw three of its members arrested by police in 2016 for spreading “incendiary material,” as Reuters reports.

Facebook isn’t saying too much about the removals other than: “we don’t want our services to be used to manipulate people.”

In January, the social network banned a fake news group in the Philippines in similar circumstances.

Despite the recent action, the U.S. company has struggled to stem the flow of false information across its services in Asia. The most extreme examples come from Myanmar, where the UN has concluded that Facebook played a key role in escalating religious hatred and fueling violence. Facebook has also been criticized for allowing manipulation in Sri Lanka and the Philippines, among other places.


Source: The Tech Crunch

Read More

Amazon’s barely-transparent transparency report somehow gets more opaque

Posted by on Jan 31, 2019 in amazon alexa, Apps, computing, e-book, Government, Online Music Stores, Privacy, Publishing, reporter, world wide web | 0 comments

Amazon posted its bi-annual report Thursday detailing the number of government data demands it receives.

The numbers themselves are unremarkable, neither spiking nor falling in the second half of last year compared to the first. The number of subpoenas, search warrants and other court orders totaled 1,736 for the period, down slightly from the previous report. Amazon still doesn’t break out demands for Echo data, but it does for Amazon Web Services content: a total of 175 requests, down from 253.

But noticeably absent compared to earlier reports was how many requests the company received to remove data from its service.

In its first-half report, the retail and cloud giant said that, among the other demands it receives, it may get court orders demanding that Amazon “remove user content or accounts.” Amazon used to report those requests “separately.”

Now it’s gone. At a time when freedom of speech and expression is more important than ever, the figure simply isn’t there anymore, not even a zero.

We reached out to Amazon to ask why it took out removal requests, but the company did not respond.

Amazon has long had a love-hate relationship with transparency reports. Notorious for its secrecy — once telling a reporter, “off the record, no comment” — the company doesn’t like to talk when it doesn’t have to. In the wake of the Edward Snowden disclosures, most companies that weren’t disclosing their government data demands quickly started. Even though Amazon wasn’t directly affected by the surveillance scandal, it held out — because it could — but later buckled, becoming the last of the major tech giants to publish a transparency report.

Even then, the effort Amazon put in was lackluster.

Unlike most other transparency reports, Amazon’s is limited to just two pages — most of which are dedicated to explaining what it does in response to each kind of demand, from subpoenas to search warrants and court orders. No graphics, no international breakdown and no announcement. It’s almost as if Amazon doesn’t want anyone to notice.

That hasn’t changed in years. Where most other companies have expanded their reports — Apple records account deletions, so does Facebook, and Microsoft, Twitter, Google and a bunch more — Amazon’s report has stayed the same.

And for no good reason except that Amazon just can. Now it’s getting even slimmer.


Source: The Tech Crunch

Read More