
The blog of DataDiggers


White House refuses to endorse the ‘Christchurch Call’ to block extremist content online

Posted on May 15, 2019

The United States will not join other nations in endorsing the “Christchurch Call” — a global statement that commits governments and private companies to actions that would curb the distribution of violent and extremist content online.

“While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the Call. We will continue to engage governments, industry, and civil society to counter terrorist content on the Internet,” the statement from the White House reads.

The “Christchurch Call” is a non-binding statement drafted by foreign ministers from New Zealand and France meant to push internet platforms to take stronger measures against the distribution of violent and extremist content. The initiative originated as an attempt to respond to the March killings of 51 Muslim worshippers in Christchurch and the subsequent spread of the video recording of the massacre and statements from the killer online.

By signing the pledge, companies agree to improve their moderation processes and share more information about the work they’re doing to prevent terrorist content from going viral. Meanwhile, government signatories are agreeing to provide more guidance through legislation that would ban toxic content from social networks.

Already, Twitter, Microsoft, Facebook and Alphabet — the parent company of Google — have signed on to the pledge, along with the governments of France, Australia, Canada and the United Kingdom.

The “Christchurch Call” is consistent with other steps that government agencies are taking to address how to manage the ways in which technology is tearing at the social fabric. Members of the Group of 7 are also meeting today to discuss broader regulatory measures designed to combat toxic content, protect privacy and ensure better oversight of technology companies.

For its part, the White House seems more concerned about the potential risks to free speech that could stem from any actions taken to staunch the flow of extremist and violent content on technology platforms.

“We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press,” the statement reads. “Further, we maintain that the best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging.”

Signatories are already taking steps to make it harder for graphic violence or hate speech to proliferate on their platforms.

Last night, Facebook introduced a one-strike policy that would ban users who violate its live-streaming policies after one infraction.

The Christchurch killings are only the latest example of how white supremacist hate groups and terrorist organizations have used online propaganda to create an epidemic of violence at a global scale. Indeed, the alleged shooter in last month’s attack on a synagogue in Poway, Calif., referenced the writings of the Christchurch killer in an explanation for his attack, which he published online.

Critics are already taking shots at the White House for its inability to add the U.S. to a group of nations making a non-binding commitment to ensure that the global community can #BeBest online.


Source: The Tech Crunch


Scalable, low cost technologies needed to repair climate, Cambridge professor suggests

Posted on May 10, 2019

Cambridge University has proposed setting up a research center tasked with coming up with scalable technological fixes for climate change.

The proposed Center for Climate Repair is being co-ordinated by David King, an emeritus professor in physical chemistry at the university and also the UK government’s former chief scientific adviser.

Speaking to the BBC this morning, King suggested the scale of the challenge now facing humanity to end greenhouse gas emissions is so pressing that radical options need to be considered and developed alongside efforts to shift societies to carbon neutrality and shrink day-to-day emissions.

“What we do over the next 10 years will determine the future of humanity for the next 10,000 years. There is no major centre in the world that would be focused on this one big issue,” he told BBC News.

In an interview on BBC Radio 4’s Today program, King said the center would need to focus on scalable, low-cost technologies that could be deployed to move the needle on the climate challenge.

Suggested ideas it could work to develop include geoengineering initiatives such as spraying sea water into the air at the north and south poles to reflect sunlight away and refreeze them; using fertilizer to regreen portions of the deep ocean to promote plankton growth; and carbon capture and storage methods to suck up and sequester greenhouse gases so they can’t contribute to accelerating global warming.

On the issue of nuclear power King said interesting work is being done to try to develop viable nuclear fusion technology — but also pointed to untapped capacity in renewable energy technologies, arguing there is an “ability to develop renewables far more than we thought before”.

If established, the Center for Climate Repair would be attached to the university’s new Cambridge Carbon Neutral Futures Initiative, a research hub recently set up to link climate-related research work across the university — and “catalyse holistic, collaborative progress towards a sustainable future”, as it puts it.

“If [the Center for Climate Repair] goes forward, it will be part of the Carbon Neutral Futures Initiative, which is led by Dr Emily Shuckburgh,” a spokeswoman for the university confirmed.

“When considering how to tackle a problem as large, complex and urgent as climate change, we need to look at the widest possible range of ideas and to investigate radical innovations such as those proposed by Sir David,” said Shuckburgh, commenting on the proposal in a statement.

“In assessing such ideas we need to explore all aspects, including the technological advances required, the potential unintended consequences and side effects, the costs, the rules and regulations that would be needed, as well as the public acceptability.”


Source: The Tech Crunch


Flipkart ranked highly for ‘fairness’ of working conditions in India gig platform study

Posted on Mar 26, 2019

The Oxford Internet Institute has published what it bills as the world’s first rating system for working conditions on gig economy platforms.

The Fairwork academic research project is a collaboration with the International Institute of Information Technology Bangalore, the University of Cape Town, the University of Manchester, and the University of the Western Cape.

As the name suggests, the project focuses on conditions for workers who are being remotely managed by online platforms and their algorithms — creating a framework to score tech firms on factors like whether they pay gig economy workers the minimum wage and ensure their health and safety at work.

The two initial markets selected for piloting the rating system are India and South Africa, and the first batch of gig economy firms ranked includes a mix of delivery, ride-hailing and freelance work platforms, among others.

The plan is to update the rating yearly, and to also add gig economy platforms operating in the UK and Germany next year.

Fairness, rated

Fairwork’s gig platform scoring system measures performance per market across five standards — which are neatly condensed as: Fair pay, fair conditions, fair contracts, fair management, and fair representation.

Platforms are scored on each performance measure with a basic point and an advanced point, culminating in an overall score. (There’s more on the scoring methodology here.)
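The scoring scheme described above can be illustrated with a short sketch. This is a hypothetical reconstruction, not Fairwork’s actual code: it assumes each of the five standards carries a basic point plus an advanced point, with the advanced point counting only if the basic one is met, for a total out of 10.

```python
# Illustrative sketch of the Fairwork scoring scheme: five standards,
# each worth a basic point plus an advanced point, total out of 10.
# The data structure and the rule that the advanced point requires the
# basic point are assumptions for illustration.

STANDARDS = ["pay", "conditions", "contracts", "management", "representation"]

def fairwork_score(assessment):
    """assessment maps each standard to a (basic, advanced) pair of bools."""
    total = 0
    for standard in STANDARDS:
        basic, advanced = assessment[standard]
        if basic:
            total += 1
            if advanced:
                total += 1
    return total

# A platform meeting everything except the advanced contracts point and
# both representation points would score 7/10 under these assumptions.
ekart = {
    "pay": (True, True),
    "conditions": (True, True),
    "contracts": (True, False),
    "management": (True, True),
    "representation": (False, False),
}
print(fairwork_score(ekart))  # 7
```

Under this reading, a platform that meets only the pay standard (basic and advanced) would score 2/10, which matches the bottom scores reported below.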

Most of the measures are self-explanatory, but the emphasis on fair contracts is for T&Cs to be “transparent, concise, and provided to workers in an accessible form”, with the contracting party subject to local law and identified in the contract.

In instances of what those behind the project dub “genuine” self-employment, terms of service must also be free of clauses that “unreasonably exclude liability” on the part of the platform.

For fair management, a good rating demands a documented process and a clear channel of communication through which workers can be heard, decisions can be appealed, and workers are informed of the reasons behind those decisions.

The use of any decision-making algorithms must also be transparent and result in “equitable outcomes for workers”. There must also be an identified and documented policy to ensure equity in areas such as hiring and firing, while any data collection must be documented, with a clear purpose and explicit informed consent.

Fair representation calls for platforms to allow workers to organize in collective bodies regardless of their employment status and be prepared to negotiate and co-operate with them.

Critical attention

Criticism of the so-called ‘gig economy’ has dialled up in recent years, in Western markets especially, as the ‘flexible’ working claims platforms trumpet have attracted closer and more critical scrutiny.

Policymakers are acting on concerns that demand for casual labor is being exploited by increasingly powerful tech firms, which apply algorithms at scale and use self-serving employment classifications designed to work around traditional labor rights, letting them micromanage large-scale workforces remotely while sidestepping the costs of actually employing so many people.

Trenchant critics liken the result to a kind of modern day slavery — arguing that rights-denuded platform workers are part of a wider beaten down ‘precariat’.

A report last year by a UK MP was more nuanced but still likened the casual labor practices on UK startup Deliveroo’s food delivery platform to the kind of dual market seen in 20th century dockyards, suggesting that while the platform could work well for some gigging riders this was at the exploitative expense of others who were not preferred for jobs in the same way — with a risk of unpredictable and unstable earnings. 

In recent years, as the number of platform workers has grown, a number of unions have stepped up activity to support contract and casual workers used by the sector, even as gig platforms have generally continued to refuse collective bargaining rights to their ‘self-employed’ workers.

Against this backdrop there have also been a number of wildcat style ‘strikes’ by gig economy workers in the UK triggered by sudden changes to pricing policies and/or conditions, or focused more broadly on trying to move the needle on pay and working conditions.

A UK union-backed attempt to use European human rights law to challenge Deliveroo’s refusal to grant collective bargaining rights for couriers was dismissed by the High Court at the end of last year. Though the union vowed to appeal.

Regardless of that particular set-back, pressure from policymakers and the publicity from legal challenges attached to workers rights have yielded a number of improvements for gig workers in Europe, with — for example — Uber announcing it would expand free insurance products for drivers across much of the region last year. And it’s clear that scrutiny of platforms is an important lever for improving conditions for workers.

It’s with that in mind that the researchers behind Fairwork have launched their rating system.

“The Fairwork rating system shines a light on best and worst practice in the platform economy,” said Mark Graham, professor of Internet geography at the University of Oxford, commenting in a statement. “This is an area in which for too long, very few regulations have been in place to protect workers. These ratings will enable consumers to make informed choices about the platforms and services they need when ordering a cab, a takeaway or outsourcing a simple task.”

“Our hope is that our five areas of fairness will take a life of their own, and that workers, platforms and other advocates will start using them to improve the working conditions across the platform economy,” he added.

And now to those first year scores in India and South Africa…

Best and worst performers

In India, ecommerce giant Flipkart came out on top of the companies ranked, with its delivery and logistics arm eKart scoring 7/10.

Though — if it wants to get a perfect 10 — it’s still got work to do on contracts, to improve clarity and ensure they reflect the true nature of the relationship, according to the researchers’ assessment.

Flipkart also does not recognize a body that could support collective bargaining for its workers.

Three tech platforms shared the wooden spoon for the worst conditions for Indian gig workers, according to the researchers’ assessment — namely: Food delivery platform Foodpanda and ride-hailing giants Ola and Uber which scored just 2/10 apiece, fulfilling only the minimum wage criteria and failing on every other measure.

UberEats, Uber’s food delivery operation, did slightly better — scoring 3/10 in India, thanks to also offering a due process for decisions affecting workers.

In South Africa, the top scorer was white collar work platform NoSweat, which got 8/10. On the improvements front, it too could do a little more work to make its contracts fairer, and it also doesn’t recognize collective bargaining.

Bottom of the list in the country is ride-hailing firm Bolt (Taxify) — which scored 4/10, hitting targets on pay and some conditions (mitigating task-specific risks), while also offering a due process for decisions affecting workers, but failing on other performance measures.

Uber didn’t do much better in South Africa either — coming in second to last, with 5/10. Though it’s notable the company does offer more protections for workers there vs those grafting on its platform in India, including mitigating task-specific risks and actively seeking to improve conditions (such as by offering insurance).

Reached for comment on its Fairwork ratings, an Uber spokesperson sent this statement:

Uber wouldn’t be what it is without drivers — they are at the heart of the Uber experience. Over the past years we have made a number of changes to offer a better experience with more support and more protection, including our Partner Injury Protection programme, new safety features and access to quality and affordable private healthcare coverage for driver-partners and their families. We will continue to work hard to earn our partners’ trust and ensure that their voices are heard as we take Uber forward together.

There’s clearly no one universal standard for Uber’s business where working conditions are concerned. Instead the company tunes its standard to the local regulatory climate — offering workers less where it believes it can get away with it.

That too suggests a stronger spotlight on conditions offered by gig economy platforms can help improve workers’ lot and raise standards globally.

On the improvements front the Fairwork researchers claim the project has already led to positive impacts in the two pilot markets — claiming discussions are “ongoing” with platforms in India about implementing changes in line with the principles, including with a platform that has some 450,000 workers.

Though they also point out the first-year rankings show the overwhelming majority of India’s platform workers are engaged on platforms that score below the Fairwork basic standards (with scores <5/10) — which covers more than a million gig economy workers.

In South Africa another positive development they point to is alcohol delivery platform Bottles committing to supporting the emergence of fair workers’ representation on its platform, after collaborating with the project.

The local NoSweat freelance work platform has also introduced what the researchers couch as “significant changes” in all five areas of fairness — now having a formal policy to pay over the minimum wage after workers’ costs are taken into account; a clear process to ensure clients on the platform agree to protect workers’ health and safety; and a channel and process for workers to lodge grievances about conditions.

Commenting in a statement, Wilfred Greyling, co-founder of NoSweat said the project had helped the company “formalise” the principles and incorporate them into its systems. “NoSweat Work believes firmly in a fair deal for all parties involved in any work we put out,” he said, adding that the platform is “built on people and relationships; we never hide behind faceless technology”.

This report was updated with comment from Uber.


Source: The Tech Crunch


XGenomes is bringing DNA sequencing to the masses

Posted on Mar 15, 2019

As healthcare moves toward genetically tailored treatments, one of the biggest hurdles to truly personalized medicine is the lack of fast, low-cost genetic testing.

And few people are more familiar with the problems of today’s genetic diagnostics tools than Kalim Mir, the 52-year-old founder of XGenomes, who has spent his entire professional career studying the human genome.

“Ultimately, genomics is going to be the foundation for healthcare,” says Mir. “For that we need to move toward a sequencing of populations.” And population-scale gene sequencing is something that current techniques are unable to achieve.

“If we’re talking about population scale sequencing with millions of people we just don’t have the throughput,” Mir says.

That’s why he started XGenomes, which is presenting as part of the latest batch of Y Combinator companies next week.

A visiting scientist in Harvard Medical School’s Department of Genetics, Mir worked with the famed Harvard professor George Church on a new kind of gene sequencing technology that promised to conduct sequencing at higher speeds and far lower costs than anything that was on the market.

The costs of sequencing a genome have come down significantly in the 19 years since the Human Genome Project completed its work for $1 billion.

These days, gene sequencing can take a couple of days and cost around $1,000, Mir says. But with XGenomes, Mir hopes to drive the cost of testing down even further.

“We developed a way where we’re sequencing directly on the DNA where we’re not manipulating it except for opening up the double helix,” says Mir. 

Running a startup focused on conducting gene sequencing at population scales is not where Mir thought he’d be when he was growing up in Yorkshire in Northern England. “When I was in school there, I was not into science or tech. I was interested in literature,” he recalls.

That changed when he read Aldous Huxley’s Brave New World and began thinking about the implications of genetic manipulation that the book presented.

Mir went on to study molecular biology at Queen Mary College and upon graduation worked in a biotech company in the U.S.

After returning to England to complete his doctorate in the mid-90s, Mir worked with the geneticist Edwin Southern on the foundational science that now forms the core of testing technologies used by companies like 23andMe, Illumina, and Affymetrix.

XGenomes’ technology works by unzipping strands of DNA and then sequencing the strands concurrently.

“I like to think of the genome as a book. The genome has chapters and the chapters could be the chromosomes,” says Mir. “Current technologies read it letter by letter. [But] we’re recognizing words.”

The company is able to accomplish this feat by using optical imaging technologies. Samples are treated with reagents that are then excited by lasers. XGenomes tech then “reads” the bits of DNA that are highlighted and identifies them.

Using this new tech, Mir thinks he can ultimately sequence a full genome in one to two hours and for as little as $100.

That would be a sea change in the way that testing is conducted and could bring about the rapid throughput of sequencing that Mir says is needed to make the vision of truly personalized medicine a reality.


Source: The Tech Crunch


The “splinternet” is already here

Posted on Mar 13, 2019

There is no question that the arrival of a fragmented and divided internet is now upon us. The “splinternet,” where cyberspace is controlled and regulated by different countries, is no longer just a concept but a dangerous reality. With the future of the “World Wide Web” at stake, governments and advocates in support of a free and open internet have an obligation to stem the tide of authoritarian regimes isolating the web to control information and their populations.

Both China and Russia have been rapidly increasing their internet oversight, leading to increased digital authoritarianism. Earlier this month Russia announced a plan to disconnect the entire country from the internet to simulate an all-out cyberwar. And, last month China issued two new censorship rules, identifying 100 new categories of banned content and implementing mandatory reviews of all content posted on short video platforms.

While China and Russia may be two of the biggest internet disruptors, they are by no means the only ones. Cuban, Iranian and even Turkish politicians have begun pushing “information sovereignty,” a euphemism for replacing services provided by western internet companies with their own more limited but easier to control products. And a 2017 study found that numerous countries, including Saudi Arabia, Syria and Yemen have engaged in “substantial politically motivated filtering.”

This digital control has also spread beyond authoritarian regimes. Increasingly, there are more attempts to keep foreign nationals off certain web properties.

For example, digital content available to U.K. citizens via the BBC’s iPlayer is becoming increasingly unavailable to Germans. South Korea filters, censors and blocks news agencies belonging to North Korea. Never have so many governments, authoritarian and democratic, actively blocked internet access to their own nationals.

The consequences of the splinternet and digital authoritarianism stretch far beyond the populations of these individual countries.

Back in 2016, U.S. trade officials accused China’s Great Firewall of creating what foreign internet executives defined as a trade barrier. Through controlling the rules of the internet, the Chinese government has nurtured a trio of domestic internet giants, known as BAT (Baidu, Alibaba and Tencent), who are all in lock step with the government’s ultra-strict regime.

The super-apps that these internet giants produce, such as WeChat, are built for censorship. The result? According to former Google CEO Eric Schmidt, “the Chinese Firewall will lead to two distinct internets. The U.S. will dominate the western internet and China will dominate the internet for all of Asia.”

Surprisingly, U.S. companies are helping to facilitate this splinternet.

Google had spent decades attempting to break into the Chinese market but had difficulty coexisting with the Chinese government’s strict censorship and collection of data, so much so that in March 2010, Google chose to pull its search engines and other services out of China. However now, in 2019, Google has completely changed its tune.

Google has made censorship allowances through an entirely different Chinese internet platform called Project Dragonfly. Dragonfly is a censored version of Google’s Western search platform, with the key difference being that it blocks results for sensitive public queries.

[Image: Sundar Pichai, chief executive officer of Google, before the start of a House Judiciary Committee hearing in Washington, D.C., on Dec. 11, 2018, at which he backed privacy legislation and denied the company is politically biased. Photographer: Andrew Harrer/Bloomberg via Getty Images]

The Universal Declaration of Human Rights states that “people have the right to seek, receive, and impart information and ideas through any media and regardless of frontiers.”

Drafted in 1948, this declaration reflects the sentiment felt following World War II, when people worked to prevent authoritarian propaganda and censorship from ever taking hold the way it once did. And, while these words were written over 70 years ago, well before the age of the internet, this declaration challenges the very concept of the splinternet and the undemocratic digital boundaries we see developing today.

As the web becomes more splintered and information more controlled across the globe, we risk the deterioration of democratic systems, the corruption of free markets and further cyber misinformation campaigns. We must act now to save a free and open internet from censorship and international maneuvering, before history repeats itself.

[Image: An Avaaz activist attends an anti-Facebook demonstration with cardboard cutouts of Facebook chief Mark Zuckerberg, labeled “Fix Fakebook”, in front of the Berlaymont, the EU Commission headquarters, in Brussels, Belgium, on May 22, 2018. Avaaz.org is an international non-governmental organization, founded in 2007, that describes itself as a “supranational democratic movement” empowering citizens worldwide to mobilize on issues such as human rights, corruption and poverty. Photo: Thierry Monasse/Corbis via Getty Images]

The Ultimate Solution

Echoing the UDHR of 1948, in 2016 the United Nations declared “online freedom” to be a fundamental human right that must be protected. While not legally binding, the motion passed by consensus, giving the UN limited power to endorse an open internet (OI) system. By selectively applying pressure on non-compliant governments, the UN can now press for digital human rights standards.

The first step would be to implement a transparent monitoring system which ensures that the full resources of the internet, and ability to operate on it, are easily accessible to all citizens. Countries such as North Korea, China, Iran and Syria, who block websites and filter email plus social media communication, would be encouraged to improve through the imposition of incentives and consequences.

All countries would be ranked on their achievement of multiple positive factors including open standards, lack of censorship, and low barriers to internet entry. A three tier open internet ranking system would divide all nations into Free, Partly Free or Not Free. The ultimate goal would be to have all countries gradually migrate towards the Free category, allowing all citizens full information across the WWW, equally free and open without constraints.
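The proposed three-tier system could be sketched as follows. Everything here is an illustrative assumption, since the article does not specify factor names, weights or thresholds: countries get sub-scores on positive factors such as open standards, lack of censorship and low barriers to entry, and the average maps to a tier.

```python
# Hypothetical sketch of the proposed three-tier open internet ranking.
# Factor names, the 0-100 scale, and the tier thresholds are all
# assumptions for illustration only.

def openness_tier(factors):
    """factors: dict mapping factor name to a 0-100 sub-score."""
    score = sum(factors.values()) / len(factors)
    if score >= 70:
        return "Free"
    if score >= 40:
        return "Partly Free"
    return "Not Free"

example = {
    "open_standards": 90,
    "lack_of_censorship": 85,
    "low_barriers_to_entry": 80,
}
print(openness_tier(example))  # Free
```

The point of such a scheme would be that migration between tiers is gradual and measurable, so incentives and consequences can be tied to movement on the underlying factors rather than to a binary judgment.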

The second step would be for the UN to align itself much more closely with the largest western internet companies. Together they could jointly assemble detailed reports on each government’s efforts towards censorship creep and government overreach. The global tech companies are keenly aware of which specific countries are applying pressure for censorship and the restriction of digital speech. Together, the UN and global tech firms would prove strong adversaries, protecting the citizens of the world. Every individual in every country deserves to know what is truly happening in the world.

The Free countries with an open internet, zero undue regulation or censorship would have a clear path to tremendous economic prosperity. Countries who remain in the Not Free tier, attempting to impose their self-serving political and social values would find themselves completely isolated, visibly violating digital human rights law.

This is not a hollow threat. A completely closed off splinternet will inevitably lead a country to isolation, low growth rates, and stagnation.


Source: The Tech Crunch


Online platforms need a super regulator and public interest tests for mergers, says UK parliament report

Posted on Mar 11, 2019

The latest policy recommendations for regulating powerful Internet platforms come from a U.K. House of Lords committee that’s calling for an overarching digital regulator to be set up to plug gaps in domestic legislation and work through any overlaps of rules.

“The digital world does not merely require more regulation but a different approach to regulation,” the committee writes in a report published on Saturday, saying the government has responded to “growing public concern” in a piecemeal fashion, whereas “a new framework for regulatory action is needed”.

It suggests a new body — which it’s dubbed the Digital Authority — be established to “instruct and coordinate regulators”.

“The Digital Authority would have the remit to continually assess regulation in the digital world and make recommendations on where additional powers are necessary to fill gaps,” the committee writes, saying that it would also “bring together non-statutory organisations with duties in this area” — so presumably bodies such as the recently created Centre for Data Ethics and Innovation (which is intended to advise the UK government on how it can harness technologies like AI for the public good).

The committee report sets out ten principles that it says the Digital Authority should use to “shape and frame” all Internet regulation — and develop a “comprehensive and holistic strategy” for regulating digital services.

These principles (listed below) read, rather unfortunately, like a list of big tech failures. Perhaps especially given Facebook founder Mark Zuckerberg’s repeated refusal to testify before another UK parliamentary committee last year. (Leading to another highly critical report.)

  • Parity: the same level of protection must be provided online as offline
  • Accountability: processes must be in place to ensure individuals and organisations are held to account for their actions and policies
  • Transparency: powerful businesses and organisations operating in the digital world must be open to scrutiny
  • Openness: the internet must remain open to innovation and competition
  • Privacy: to protect the privacy of individuals
  • Ethical design: services must act in the interests of users and society
  • Recognition of childhood: to protect the most vulnerable users of the internet
  • Respect for human rights and equality: to safeguard the freedoms of expression and information online
  • Education and awareness-raising: to enable people to navigate the digital world safely
  • Democratic accountability, proportionality and evidence-based approach

“Principles should guide the development of online services at every stage,” the committee urges, calling for greater transparency at the point data is collected; greater user choice over which data are taken; and greater transparency around data use — “including the use of algorithms”.

So, in other words, a reversal of the ‘opt-out if you want any privacy’ approach to settings that’s generally favored by tech giants — even as it’s being challenged by complaints filed under Europe’s GDPR.

The UK government is due to put out a policy White Paper on regulating online harms this winter. But the Lords Communications Committee suggests the government’s focus is too narrow, calling also for regulation that can intervene to address how “the digital world has become dominated by a small number of very large companies”.

“These companies enjoy a substantial advantage, operating with an unprecedented knowledge of users and other businesses,” it warns. “Without intervention the largest tech companies are likely to gain more control of technologies which disseminate media content, extract data from the home and individuals or make decisions affecting people’s lives.”

The committee recommends public interest tests should therefore be applied to potential acquisitions when tech giants move in to snap up startups, warning that current competition law is struggling to keep pace with the ‘winner takes all’ dynamic of digital markets and their network effects.

“The largest tech companies can buy start-up companies before they can become competitive,” it writes. “Responses based on competition law struggle to keep pace with digital markets and often take place only once irreversible damage is done. We recommend that the consumer welfare test needs to be broadened and a public interest test should be applied to data-driven mergers.”

Market concentration also means a small number of companies have “great power in society and act as gatekeepers to the internet”, it warns, suggesting that while greater use of data portability can help, “more interoperability” is required for the measure to be an effective remedy.

The committee also examined online platforms’ current legal liabilities around content, and recommends beefing these up too — saying self-regulation is failing and calling out social media sites’ moderation processes specifically as “unacceptably opaque and slow”.

High level political pressure in the UK recently led to a major Instagram policy change around censoring content that promotes suicide — though the shift was triggered after a public outcry related to the suicide of a young schoolgirl who had been exposed to pro-suicide content on Instagram years before.

Like other UK committees and government advisors, the Lords committee wants online services which host user-generated content to be subject to a statutory duty of care — with a special focus on children and “the vulnerable in society”.

“The duty of care should ensure that providers take account of safety in designing their services to prevent harm. This should include providing appropriate moderation processes to handle complaints about content,” it writes, recommending telecoms regulator Ofcom is given responsibility for enforcement.

“Public opinion is growing increasingly intolerant of the abuses which big tech companies have failed to eliminate,” it adds. “We hope that the industry will welcome our 10 principles and their potential to help restore trust in the services they provide. It is in the industry’s own long-term interest to work constructively with policy-makers. If they fail to do so, they run the risk of further action being taken.”


Source: The Tech Crunch

Read More

The next great debate will be about the role of tech in society and government

Posted by on Mar 10, 2019 in articles, Artificial Intelligence, basic income, chief technology officer, Column, economy, Energy, industrial, Lambda School, Obama, online courses, president, quantum computing, social security, United Kingdom, United States | 0 comments

The Industrial Revolution dramatically re-ordered the sociology of politics. In the US, the Populist Party was founded as a force in opposition to capitalism, wary of modernity. In the UK, the profound economic changes reshaped policy: from the Factory and Workers Act through to the liberal reforms of David Lloyd George, which ultimately laid the ground for the welfare state, the consequences were felt for the whole of the next century.

Today, another far-reaching revolution is underway, which is causing similar ripple effects. Populists of both left and right have risen in prominence and are more successful than their American forebears at the turn of the 20th century, but are similarly rejecting of modernisation. And in their search for scapegoats to sustain their success, tech is now firmly in their firing line.

The risk is that it sets back progress in an area that is yet to truly transform public policy. In the UK at least, the government machine looks little different from how it did when Lloyd George announced the People’s Budget in 1909.

The first politicians who master this tech revolution and shape it for the public good will determine what the next century will look like. Rapid developments in technologies such as gene-editing and Artificial Intelligence, as well as the quest for potential ground-breaking leaps forward in nuclear fission and quantum computing, will provoke significant changes to our economies, societies and politics.

Yet, today, very few are even asking the right questions, let alone providing answers. This is why I’m focusing on technology as the biggest single topic that policymakers need to engage with. Through my institute, I’m hoping to help curate the best thinking on these critical issues and devise politically actionable policy and strategy to deal with them. This will help put tech, innovation and investment in research and development at the forefront of the progressive programme. And we do so in the belief that tech is – and will continue to be – a generally positive force for society.

This is not to ignore the problems that surfaced as a result of these changes, because there are genuine issues around privacy and public interest.

[Photo: monitors show imagery from security cameras at the Lower Manhattan Security Initiative in New York City, where police and private security personnel monitor more than 4,000 surveillance cameras and license plate readers around the Financial District. John Moore/Getty Images]

The shifts that have occurred and will occur in the labour market as a result of automation will require far more thinking about governments’ role, as those likely to bear the brunt are those already feeling left behind. Re-training alone will not suffice; lifelong investment in skills will be required. A Universal Basic Income, likewise, feels insufficient, a last resort rather than an active, well-targeted policy solution.


But pessimism is a poor guide to the future. It ends in conservatism in one form or another, whether that is simple statism, protectionism or nationalism. And so the challenge for those of us who believe in this agenda of harnessing the opportunities, while mitigating the risks, is to put this in a way that connects with people’s lives. This should be a New Deal or People’s Budget type moment; a seismic change in public policy as we pivot to the future.

At the highest level this is about the role of the state in the 21st century, which needs to move away from ideological debates over size and spend and towards how it is re-ordered to meet the demands of people today. In the US, President Obama made some big strides with the role of the Chief Technology Officer, but it will require a whole rethinking of government’s modus operandi, so that it is able to keep up with the pace of change around it.


Across all the key policy areas we should be asking: how can tech be used to enable people to live their lives as they choose, increase their quality of life and deliver more opportunities to flourish and succeed?

For example, in education it will include looking at new models of teaching. Online courses have raised the possibility of changing the business of learning, while AI may be able to change the nature of teaching, providing more personalised platforms and freeing teachers to spend their time more effectively. It could also include new models of funding, such as the Lambda School, which present exciting possibilities for the future.

Similarly with health, the use of technology in diagnostics is well-documented. But it can be transformative in how we deploy our resources, whether that is freeing up more front-line staff to give them more time with patients, or even in how the whole model currently works. As it stands a huge amount of costs go on the last days of life and on the elderly. But far more focus should go on prevention and monitoring, so that people can lead longer lives, have less anxiety about ill health and lower the risks of illnesses becoming far more serious than they need to be. Technology, which can often feel so intangible, can be revolutionary in this regard.

In infrastructure and transport too, there are potentially huge benefits, whether in new and more efficient forms of transport or in how we design our public space so that it works better for citizens. This will necessitate large projects to better connect communities, but also a focus on small and simple solutions to everyday concerns, such as using sensors to collect data that improve services and raise everyday standards of living. The Boston Mayor’s office has been at the frontier of such thinking, and more thought must go into how we use data to improve tax, welfare, energy and the public good.

Achieving this will better align government with the pace of change that has been happening in society. As it stands, the two are out of sync, and unless government catches up, belief and trust in institutions to work for people will continue to fall. Populism thrives in this space. But the responsibility is not solely on politicians. It is not enough for those in the tech world to say politicians don’t get it.

Those working in the sector must help them to understand and support policy development, rather than allow misunderstandings and mistrust to compound. Because in little more than two decades, the digital revolution has dramatically altered the shape of our economies and societies. This can continue, but only if companies work alongside governments to truly deliver the change that so many slogans aspire to.


Source: The Tech Crunch

Read More

Car alarms with security flaws put 3 million vehicles at risk of hijack

Posted by on Mar 8, 2019 in Alarms, api, Automotive, California, computer security, founder, Security, United Kingdom | 0 comments

Two popular car alarm systems have fixed security vulnerabilities that allowed researchers to remotely track, hijack and take control of vehicles with the alarms installed.

The systems, built by Russian alarm maker Pandora and California-based Viper (sold as Clifford in the U.K.), were vulnerable through an easily manipulated server-side API, according to researchers at Pen Test Partners, a U.K. cybersecurity company. According to their findings, the API could be abused to take control of an alarm system’s user account — and their vehicle.

The vulnerable alarm systems could be tricked into resetting an account password because the API failed to check whether the request was authorized, allowing the researchers to log in.
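Pen Test Partners did not publish the exact endpoints, but the class of bug is well known: a password-reset API that acts on whatever account the request names, without checking who is asking. The sketch below is purely illustrative (all names and the in-memory user store are invented, not the real Pandora or Viper API), contrasting the broken pattern with an authorization check.

```python
# Hypothetical sketch of the bug class: a password-reset handler that
# trusts attacker-controlled input. Names and data are invented for
# illustration; this is not the real Pandora/Viper API.

USERS = {"victim@example.com": {"password": "hunter2"}}

def insecure_reset(request_json):
    # BUG: no check that the caller owns this account. Anyone who knows
    # (or enumerates) an email address can set a new password and log in.
    email = request_json["email"]
    USERS[email]["password"] = request_json["new_password"]
    return {"status": "ok"}

def secure_reset(request_json, authenticated_user):
    # FIX: derive the target account from the authenticated session and
    # refuse requests that name anyone else's account.
    if authenticated_user != request_json["email"]:
        return {"status": "forbidden"}, 403
    USERS[authenticated_user]["password"] = request_json["new_password"]
    return {"status": "ok"}, 200
```

The key design point is that authorization is enforced server-side against the session identity, never inferred from fields the client supplies.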

Although the researchers bought alarms to test, they said “anyone” could create a user account to access any genuine account or extract all the companies’ user data.

The researchers said some three million cars globally were vulnerable to the flaws, since fixed.

In one example demonstrating the hack, the researchers geolocated a target vehicle, tracked it in real time, followed it, remotely killed the engine to force the car to stop, and unlocked the doors. The researchers said it was “trivially easy” to hijack a vulnerable vehicle. Worse, it was possible to identify some car models, making targeted hijacks of high-end vehicles even easier.

According to their findings, the researchers could also listen in on the in-car microphone, built into the Pandora alarm system for making calls to the emergency services or roadside assistance.

Ken Munro, founder of Pen Test Partners, told TechCrunch this was their “biggest” project.

The researchers contacted both Pandora and Viper with a seven-day disclosure period, given the severity of the vulnerabilities. Both companies responded quickly to fix the flaws.

When reached, Viper’s Chris Pearson confirmed the vulnerability has been fixed. “If used for malicious purposes, [the flaw] could allow customer’s accounts to be accessed without authorization.”

Viper blamed a recent system update by a service provider for the bug and said the issue was “quickly rectified.”

“Directed believes that no customer data was exposed and that no accounts were accessed without authorization during the short period this vulnerability existed,” said Pearson, who provided no evidence as to how the company came to that conclusion.

In a lengthy email, Pandora’s Antony Noto challenged several of the researchers’ findings, summarizing: “The system’s encryption was not cracked, the remotes were not hacked, [and] the tags were not cloned,” he said. “A software glitch allowed temporary access to the device for a short period of time, which has now been addressed.”

The research follows work last year by Vangelis Stykas on CalAmp, a telematics provider that serves as the basis for Viper’s mobile app. Stykas, who later joined Pen Test Partners and also worked on the car alarm project, found the app was using credentials hardcoded into it to log in to a central database, which gave anyone who logged in remote control of a connected vehicle.
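The anti-pattern here is a single privileged credential baked into a client app that every install shares: anyone who extracts it from the binary inherits the backend’s full access. The sketch below is illustrative only (the `FakeDB` class, credentials and record names are all invented, not CalAmp’s system), contrasting a shared hardcoded login with per-user, server-scoped sessions.

```python
# Illustrative sketch of the hardcoded-credentials anti-pattern.
# Everything here (FakeDB, the credentials, the records) is invented
# for illustration and does not reflect the real CalAmp backend.

class FakeDB:
    """Stand-in for a telematics backend holding per-user vehicle records."""
    RECORDS = {"alice": "alice's vehicle", "bob": "bob's vehicle"}
    SHARED = ("app_shared", "s3cret")          # one login shipped in every app install
    TOKENS = {"alice": "tok-a", "bob": "tok-b"}  # per-user credentials

    def login_shared(self, user, password):
        # Anti-pattern: whoever extracts this credential from the app
        # binary can read (and control) every customer's records.
        assert (user, password) == self.SHARED
        return dict(self.RECORDS)

    def login_scoped(self, user, token):
        # Safer: each user authenticates individually and the server
        # only returns data that session is authorized to see.
        assert self.TOKENS.get(user) == token
        return {user: self.RECORDS[user]}

db = FakeDB()
everything = db.login_shared("app_shared", "s3cret")  # entire fleet exposed
just_mine = db.login_scoped("alice", "tok-a")         # only alice's record
```

The fix CalAmp-style incidents point toward is exactly this separation: secrets stay on the server, clients hold only individual, revocable credentials, and authorization is enforced per session.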


Source: The Tech Crunch

Read More

UK Far Right activist circumvents Facebook ban to livestream threats

Posted by on Mar 5, 2019 in Alex Jones, Europe, Facebook, far right, Google, hate speech, online platforms, Policy, Social, Social Media, social media platforms, social media tools, Stephen Yaxley-Lennon, Tommy Robinson, United Kingdom, YouTube | 0 comments

Stephen Yaxley-Lennon, a Far Right UK activist who was permanently banned from Facebook last week for repeatedly breaching its community standards on hate speech, was nonetheless able to use its platform to livestream harassment of an anti-fascist blogger whom he doorstepped at home last night.

UK-based blogger Mike Stuchbery detailed the intimidating incident in a series of tweets earlier today, writing that Yaxley-Lennon appeared to have used a friend’s Facebook account to circumvent the ban on his own Facebook and Instagram pages.

In recent years Yaxley-Lennon, who goes by the moniker ‘Tommy Robinson’ on social media, has used online platforms to raise his profile and solicit donations to fund Far Right activism.

He has also, in the case of Facebook and Twitter, fallen foul of mainstream tech platforms’ community standards, which prohibit use of their tools for hate speech and intimidation, earning himself a couple of bans. (At the time of writing Yaxley-Lennon has not been banned from Google-owned YouTube.)

Though circumventing Facebook’s ban appears to have been trivially easy for Yaxley-Lennon, who, as well as selling himself as a Far Right activist called “Tommy Robinson”, previously co-founded the Islamophobic Far Right pressure group, the English Defence League.

Giving an account of being doorstepped by Yaxley-Lennon in today’s Independent, Stuchbery writes: “The first we knew of it was a loud, frantic rapping on my door at around quarter to 11 [in the evening]… That’s when notifications began to buzz on my phone — message requests on Facebook pouring in, full of abuse and vitriol. “Tommy” was obviously livestreaming his visit, using a friend’s Facebook account to circumvent his ban, and had tipped off his fans.”

A repost (to YouTube) of what appears to be a Facebook Live stream of the incident corroborates Stuchbery’s account, showing Yaxley-Lennon outside a house at night, where he can be seen shouting for “Mike” to come out and banging on doors and/or windows.

At another point in the same video Yaxley-Lennon can be seen walking away when he spots a passerby and engages them in conversation. During this portion of the video Yaxley-Lennon publicly reveals Stuchbery’s address — a harassment tactic that’s known as doxxing.

He can also be heard making insinuating remarks to the unidentified passerby about what he claims are Stuchbery’s “wrong” sexual interests.

In another tweet today Stuchbery describes the remarks as defamatory, adding that he now intends to sue Yaxley-Lennon.

Stuchbery has also posted several screengrabs to Twitter, showing a number of Facebook users who he is not connected to sending him abusive messages — presumably during the livestream.

During the video Yaxley-Lennon can also be heard making threats to return, saying: “Mike Stuchbery. See you soon mate, because I’m coming back and back and back and back.”

In a second livestream, also later reposted to YouTube, Yaxley-Lennon can be heard apparently having returned a second time to Stuchbery’s house, now at around 5am, to cause further disturbance.

Stuchbery writes that he called the police to report both visits. In another tweet he says they “eventually talked ‘Tommy’ into leaving, but not before he gave my full address, threatened to come back tomorrow, in addition to making a documentary ‘exposing me’”.

We reached out to Bedfordshire Police to ask what it could confirm about the incidents at Stuchbery’s house and the force’s press office told us it had received a number of enquiries about the matter. A spokeswoman added that it would be issuing a statement later today. We’ll update this post when we have it.  

Stuchbery also passed us details of the account he believes was used to livestream the harassment — suggesting it’s linked to another Far Right activist, known by the moniker ‘Danny Tommo’, who was also banned by Facebook last week.

Though the Facebook account in question was using a different moniker — ‘Jack Dawkins’. This suggests, if the account did indeed belong to the same banned Far Right activist, he was also easily able to circumvent Facebook’s ban by creating a new account with a different (fake) name and email.

We passed the details of the ‘Jack Dawkins’ account to Facebook and since then the company appears to have suspended the account. (A message posted to it earlier today claimed it had been hacked.)

The fact that Yaxley-Lennon was able to use Facebook to livestream harassment a few days after he was banned underlines quite how porous Facebook’s platform remains for organized purveyors of hate and harassment. Studies of Facebook’s platform have previously suggested as much.

Which makes high profile ‘Facebook bans’ of hate speech activists mostly a crisis PR exercise for the company. And indeed easy PR for Far Right activists who have been quick to seize on and trumpet social media bans as ‘evidence’ of mainstream censorship of their point of view — liberally ripping from the playbook of US hate speech peddlers, such as the (also ‘banned’) InfoWars conspiracy theorist Alex Jones. Such as by posting pictures of themselves with their mouths gagged with tape.

Such images are intended to make meme-able messages for their followers to share. But the reality for social media savvy hate speech activists like Jones and Yaxley-Lennon looks nothing like censorship — given how demonstrably easy it remains for them to circumvent platform bans and carry on campaigns of hate and harassment via mainstream platforms.

We reached out to Facebook for a response to Yaxley-Lennon’s use of its livestreaming platform to harass Stuchbery, and to ask how it intends to prevent banned Far Right activists from circumventing bans and carrying on making use of its platform.

The company declined to make a public statement, though it did confirm the livestream had been flagged as violating its community standards last night and was removed afterwards. It also said it had deleted one post by a user for bullying. It added that it has content and safety teams which work around the clock to monitor Live videos flagged for review by Facebook users.

It did not confirm how long Yaxley-Lennon’s livestream was visible on its platform.

Stuchbery, a former history teacher, has garnered attention online writing about how Far Right groups have been using social media to organize and crowdfund ‘direct action’ in the offline world, including by targeting immigrants, Muslims, politicians and journalists in the street or on their own doorsteps.

But the trigger for Stuchbery being personally targeted by Yaxley-Lennon appears to be a legal letter served to the latter’s family home at the weekend informing him he’s being sued for defamation.

Stuchbery has been involved in raising awareness about the legal action, including promoting a crowdjustice campaign to raise funds for the suit.

The litigation relates to allegations Yaxley-Lennon made online late last year about a 15-year-old Syrian refugee schoolboy called Jamal who was shown in a video that went viral being violently bullied by white pupils at his school in Northern England.

Yaxley-Lennon responded to the viral video by posting a vlog to social media in which he makes a series of allegations about Jamal. The schoolboy’s family have described the allegations as defamatory. And the crowdjustice campaign promoted by Stuchbery has since raised more than £10,000 to sue Yaxley-Lennon.

The legal team pursuing the defamation litigation has also written that it intends to explore “routes by which the social media platforms that provide a means of dissemination to Lennon can also be attached to this action”.

The video of Yaxley-Lennon making claims about Jamal can still be found on YouTube. As indeed can Yaxley-Lennon’s own channel — despite equivalent pages having been removed from Facebook and Twitter (the latter pulled the plug on Yaxley-Lennon’s account a year ago).

We asked YouTube why it continues to provide a platform for Yaxley-Lennon to amplify hate speech and solicit donations for campaigns of targeted harassment but the company declined to comment publicly on the matter.

It did point out it demonetized Yaxley-Lennon’s channel last month, having determined it breaches its advertising policies.

YouTube also told us that it removes any video content that violates its hate speech policies — which do prohibit the incitement of violence or hatred against members of a religious community.

But by ignoring the wider context here — i.e. Yaxley-Lennon’s activity as a Far Right activist — and allowing him to continue broadcasting on its platform YouTube is leaving the door open for dog whistle tactics to be used to signal to and stir up ‘in the know’ followers — as was the case with another Internet savvy operator, InfoWars’ Alex Jones (until YouTube eventually terminated his channel last year).

Until last week Facebook was also ignoring the wider context around Yaxley-Lennon’s Far Right activism — a decision that likely helped him reach a wider audience than he would otherwise have been able to. So now Facebook has another full-blown hate speech ‘influencer’ going rogue on its platform and being cheered by an audience of followers its tools helped amass.

There is, surely, a lesson here.

Yet it’s also clear mainstream platforms are unwilling to pro-actively and voluntarily adapt their rules to close down malicious users who seek to weaponize social media tools to spread hate and sow division via amplified harassment.

But if platforms won’t do it, it’ll be left to governments to curb social media’s antisocial impacts with regulation.

And in the UK there is now no shortage of appetite to try; the government has a White Paper on social media and safety coming this winter. While the official opposition has said it wants to create a new regulator to rein in online platforms and even look at breaking up tech giants. So watch this space.

Public attitudes to (anti)social media have certainly soured — and with livestreams of hate and harassment it’s little wonder.

“Perhaps the worst thing, in the cold light of day, is the near certainty that the “content” “Tommy” produced during his stunt will now be used as a fundraising tool,” writes Stuchbery, concluding his account of being on the receiving end of a Facebook Live spewing hate and harassment. “If you dare to call him out on his cavalcade of hate, he usually tries to monetize you. It is a cruel twist.

“But most of all, I wonder how we got in this mess. I wonder how we got to a place where those who try to speak out against hatred and those who peddle it are threatened at their homes. I despair at how social media has become a weapon wielded by some, seemingly with impunity, to silence.”


Source: The Tech Crunch

Read More

Researchers obtain a command server used by North Korean hacker group

Posted by on Mar 4, 2019 in computer security, cyberattacks, Cyberwarfare, Europe, Government, Hack, hacker, malware, McAfee, North Korea, Security, Sony, United Kingdom, United States | 0 comments

In a rare move, government officials have handed security researchers a seized server believed to be used by North Korean hackers to launch dozens of targeted attacks last year.

The server was used to deliver a malware campaign, known as Operation Sharpshooter, targeting governments, telecoms and defense contractors — first uncovered in December. The hackers sent malicious Word documents by email that, when opened, would run macro code to download a second-stage implant, dubbed Rising Sun, which the hackers used to conduct reconnaissance and steal user data.

The Lazarus Group, a hacker group linked to North Korea, was the prime suspect given the overlap with similar code previously used by hackers, but a connection was never confirmed.

Now, McAfee says it’s confident to make the link.

“This was a unique first experience in all my years of threat research and investigations,” Christiaan Beek, lead scientist and senior principal engineer at McAfee, told TechCrunch in an email. “In having visibility into an adversary’s command-and-control server, we were able to uncover valuable information that led to more clues to investigate,” he said.

The move was part of an effort to better understand the threat from the nation state, which has in recent years been blamed for the 2016 Sony hack and the WannaCry ransomware outbreak in 2017, as well as more targeted attacks on global businesses.

In the new research seen by TechCrunch out Sunday, the security firm’s examination of the server code revealed Operation Sharpshooter was operational far longer than first believed — dating back to September 2017 — and targeted a broader range of industries and countries, including financial services and critical infrastructure in Europe, the U.K. and the U.S.

The modular command and control structure of the Rising Sun malware. (Image: McAfee)

The research showed that the server, operating as the malware’s command-and-control infrastructure, was written in PHP and ASP, web languages used for building websites and web-based applications, making it easily deployed and highly scalable.

The back-end has several components used to launch attacks on the hackers’ targets. Each component has a specific role, such as the implant downloader, which hosts and pulls the implant from another downloader; and the command interpreter, which operates the Rising Sun implant through an intermediate hacked server to help hide the wider command structure.

The researchers say the hackers used a factory-style approach to building Rising Sun, a modular type of malware pieced together from different components over several years. “These components appear in various implants dating back to 2016, which is one indication that the attacker has access to a set of developed functionalities at their disposal,” said McAfee’s research. The researchers also found a “clear evolutionary” path from Duuzer, a backdoor used to target South Korean computers as far back as 2015 and part of the same family of malware used in the Sony hack, also attributed to North Korea.

Although the evidence points to the Lazarus Group, the log files show a batch of IP addresses purportedly from Namibia, which the researchers can’t explain.

“It is quite possible that these unobfuscated connections may represent the locations that the adversary is operating from or testing in,” the research said. “Equally, this could be a false flag,” such as an effort to cause confusion in the event that the server is compromised.

The research represents a breakthrough in understanding the adversary behind Operation Sharpshooter. Attribution of cyberattacks is difficult at best, a fact that security researchers and governments alike recognize, given malware authors and threat groups share code and leave red herrings to hide their identities. But obtaining a command and control server, the core innards of a malware campaign, is telling.

Even if the goals of the campaign are still a mystery, McAfee’s chief scientist Raj Samani said the access will “give us deeper insights in investigations moving forward.”


Source: The Tech Crunch

Read More