The blog of DataDiggers

Reality Check: The marvel of computer vision technology in today’s camera-based AR systems

Posted on May 15, 2019 in Animation, AR, ar/vr, Artificial Intelligence, Augmented Reality, Column, Computer Vision, computing, Developer, digital media, Gaming, gif, Global Positioning System, gps, mobile phones, neural network, starbucks, TC, virtual reality, VR | 0 comments

British science fiction writer Sir Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Augmented reality has the potential to instill awe and wonder in us just as magic would. For the very first time in the history of computing, we now have the ability to blur the line between the physical world and the virtual world. AR promises to bring forth the dawn of a new creative economy, where digital media can be brought to life and given the ability to interact with the real world.

AR experiences can seem magical, but what exactly is happening behind the curtain? To answer this, we must look at the three basic foundations of a camera-based AR system like our smartphone.

  1. How do computers know where they are in the world? (Localization + Mapping)
  2. How do computers understand what the world looks like? (Geometry)
  3. How do computers understand the world as we do? (Semantics)

Part 1: How do computers know where they are in the world? (Localization)

Mars Rover Curiosity taking a selfie on Mars. Source: https://www.nasa.gov/jpl/msl/pia19808/looking-up-at-mars-rover-curiosity-in-buckskin-selfie/

When NASA scientists put the rover onto Mars, they needed a way for the robot to navigate itself on a different planet without the use of a global positioning system (GPS). They came up with a technique called Visual Inertial Odometry (VIO) to track the rover’s movement over time without GPS. This is the same technique that our smartphones use to track their spatial position and orientation.

A VIO system is made up of two parts.
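As the name suggests, those two parts are a visual stream (camera frames) and an inertial stream (accelerometer and gyroscope readings). The toy Python sketch below is only a hypothetical illustration of why fusing them helps: inertial dead-reckoning is smooth but drifts over time, while visual tracking is noisier frame to frame but does not accumulate drift. It is not how a production VIO estimator (typically a full 6-DoF Kalman-style filter) is actually implemented.

```python
# Deliberately simplified sketch of the fusion idea behind VIO.
# Real systems estimate full 6-DoF pose; here we only blend two 1-D
# position estimates to show why the two sensor streams complement each other.

def integrate_imu(position, velocity, accel, dt):
    """Dead-reckon position from an accelerometer reading (drifts over time)."""
    velocity += accel * dt
    position += velocity * dt
    return position, velocity

def fuse(imu_position, visual_position, alpha=0.95):
    """Complementary filter: lean on the IMU short-term, the camera long-term."""
    return alpha * imu_position + (1.0 - alpha) * visual_position

position, velocity = 0.0, 0.0
measurements = [(0.10, 0.000), (0.12, 0.001), (0.09, 0.002)]  # (accel m/s^2, visual position m)
for accel, visual_position in measurements:
    position, velocity = integrate_imu(position, velocity, accel, dt=0.01)
    position = fuse(position, visual_position)
    print(f"fused position estimate: {position:.6f} m")
```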


Source: The Tech Crunch

Read More

Microsoft open-sources a crucial algorithm behind its Bing Search services

Posted on May 15, 2019 in Artificial Intelligence, Bing, Cloud, computing, Developer, Microsoft, open source software, search results, Software, windows phone, world wide web | 0 comments

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return search results to its users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where users search through vast data troves, including in retail, though in this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and the AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
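As a rough illustration of the idea (this is plain brute-force search in Python, not SPTAG’s actual API, and the random “encoder” merely stands in for the deep learning model), vector search boils down to comparing a query vector against every indexed vector and returning the closest matches. SPTAG’s tree- and graph-based indexes replace that linear scan so the lookup stays in the millisecond range even at billion-vector scale.

```python
# Brute-force vector search, for illustration only (not the SPTAG library itself).
import numpy as np

documents = ["coffee shops near pike place", "best espresso in seattle", "python unit testing guide"]

rng = np.random.default_rng(0)
encode = lambda text: rng.normal(size=64)      # stand-in for a deep learning encoder
doc_vectors = np.stack([encode(d) for d in documents])

def search(query_vector, vectors, k=2):
    # Cosine similarity between the query vector and every indexed vector.
    sims = vectors @ query_vector / (
        np.linalg.norm(vectors, axis=1) * np.linalg.norm(query_vector))
    return np.argsort(-sims)[:k]               # indices of the k closest vectors

query_vector = encode("where can I get good coffee")
for idx in search(query_vector, doc_vectors):
    print(documents[idx])
```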

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.


Source: The Tech Crunch

Read More

GitHub gets a package registry

Posted on May 10, 2019 in computing, Developer, Git, GitHub, Java, Javascript, npm, ruby, Software, TC, version control | 0 comments

GitHub today announced the launch of a limited beta of the GitHub Package Registry, its new package management service that lets developers publish public and private packages next to their source code.

To be clear, GitHub isn’t launching a competitor to tools like npm or RubyGems. What the company is launching, however, is a service that is compatible with these tools and allows developers to find and publish their own packages, using the same GitHub interface they use for their code. The new service is currently compatible with JavaScript (npm), Java (Maven), Ruby (RubyGems), .NET (NuGet) and Docker images, with support for other languages and tools to come.

“GitHub Package Registry is compatible with common package management clients, so you can publish packages with your choice of tools,” Simina Pasat, director of Product Management at GitHub, explains in today’s announcement. “If your repository is more complex, you’ll be able to publish multiple packages of different types. And, with webhooks or with GitHub Actions, you can fully customize your publishing and post-publishing workflows.”

With this, businesses can then also provide their employees with a single set of credentials to manage both their code and packages — and this new feature makes it easy to create a set of approved packages, too. Users will also get download statistics and access to the entire history of the package on GitHub.

Most open-source packages already use GitHub to develop their code before they publish it to a public registry. GitHub argues that these developers can now also use the GitHub Package Registry to publish pre-release versions, for example.

Developers already often use GitHub to host their private repositories. After all, it makes sense to keep packages and code in the same place. What GitHub is doing here, to some degree, is formalizing this practice and wrapping a product around it.


Source: The Tech Crunch

Read More

Binance pledges to ‘significantly’ increase security following $40M Bitcoin hack

Posted on May 10, 2019 in articles, Binance, Bitcoin, blockchain, ceo, computing, cryptocurrencies, cryptocurrency, digital currencies, phishing | 0 comments

Binance has vowed to raise the quality of its security in the aftermath of a hack that saw thieves make off with over $40 million in Bitcoin from the exchange.

The company — which is widely believed to operate the world’s largest crypto exchange based on trading volumes — said today that it will “significantly revamp” its security measures, procedures and practices in response. In particular, CEO Changpeng Zhao wrote in a blog post that Binance will make “significant changes to the API, 2FA, and withdrawal validation areas, which was an area exploited by hackers during this incident.”

Speaking on a livestream following the disclosure of the hack earlier this week, Zhao said the hackers had been “very patient” and, in addition to targeting high-net-worth Binance users, he suggested the attack had used both internal and external vectors. That might well mean phishing, and that’s an area where Zhao has pledged to work on “more innovative ways” to combat threats, alongside improved KYC and better user and threat analysis.

“We are working with a dozen or so industry-leading security expert teams to help improve our security as well as track down the hackers,” Zhao wrote. He added that other exchanges are helping as best they can to track and freeze the stolen assets.

The real focus must be to look forward, and in that spirit, Binance said it will soon add support for hardware-based two-factor-authentication keys as a method to log in to its site.

That’s probably long overdue and, perhaps to make up for the delay, Zhao said the company plans to give away 1,000 YubiKeys when the feature goes live. That’s a worthy gesture but, unless Binance is giving out a discount code to redeem on the website directly, security purists would likely recommend that users buy their own key to ensure it has not been tampered with.

The final notable update is when Binance will resume withdrawals and deposits, which it froze in the wake of the attack. There’s no definitive word on that yet, with Zhao suggesting that the timeframe is “early next week.”

Oh, and on that proposed Bitcoin blockchain “reorg” — which attracted a mocking reaction from many in the blockchain space — Zhao, who is also known as CZ, said he is sorry.

“It is my strong view that our constant and transparent communication is what sets us apart from the ‘old way of doing things’, even and especially in tough times,” he wrote defiantly, adding that he doesn’t intend to reduce his activity on Twitter — where he is approaching 350,000 followers.


Source: The Tech Crunch

Read More

Index Ventures, Stripe back bookkeeping service Pilot with $40M

Posted on Apr 18, 2019 in computing, Dropbox, Finance, funding, Index Ventures, jessica mckellar, ksplice, linux, MIT, oracle, San Francisco, Software, Startup company, Startups, stripe, Waseem Daher, zulip | 0 comments

Five years after Dropbox acquired their startup Zulip, Waseem Daher, Jeff Arnold and Jessica McKellar have gained traction for their third business together: Pilot.

Pilot helps startups and small businesses manage their back office. Chief executive officer Daher admits it may seem a little boring, but the market opportunity is undeniably huge. To tackle the market, Pilot is today announcing a $40 million Series B led by Index Ventures with participation from Stripe, the online payment processing system.

The round values Pilot, which has raised about $60 million to date, at $355 million.

“It’s a massive industry that has sucked in the past,” Daher told TechCrunch. “People want a really high-quality solution to the bookkeeping problem. The market really wants this to exist and we’ve assembled a world-class team that’s capable of knocking this out of the park.”

San Francisco-based Pilot launched in 2017, more than a decade after the three founders met in MIT’s student computing group. It’s not surprising they’ve garnered attention from venture capitalists, given that their first two companies resulted in notable acquisitions.

“Pilot has taken on a massively overlooked but strategic segment — bookkeeping,” Index’s Mark Goldberg told TechCrunch via email. “While dry on the surface, the opportunity is enormous given that an estimated $60 billion is spent on bookkeeping and accounting in the U.S. alone. It’s a service industry that can finally be automated with technology and this is the perfect team to take this on — third-time founders with a perfect combo of financial acumen and engineering.”

The trio of founders’ first project, Linux upgrade software called Ksplice, sold to Oracle in 2011. Their next business, Zulip, exited to Dropbox before it even had the chance to publicly launch.

It was actually upon building Ksplice that Daher and team realized their dire need for tech-enabled bookkeeping solutions.

“We built something internally like this as a byproduct of just running [Ksplice],” Daher explained. “When Oracle was acquiring our company, we met with their finance people and we described this system to them and they were blown away.”

It took a few years for the team to refocus their efforts on streamlining back-office processes for startups, opting to build business chat software in Zulip first.

Pilot’s software integrates with other financial services products to bring the bookkeeping process into the 21st century. Its platform, for example, works seamlessly on top of QuickBooks so customers aren’t wasting precious time updating and managing the accounting application.

“It’s better than the slow, painful process of doing it yourself and it’s better than hiring a third-party bookkeeper,” Daher said. “If you care at all about having the work be high-quality, you have to have software do it. People aren’t good at these mechanical, repetitive, formula-driven tasks.”

Currently, Pilot handles bookkeeping for more than $100 million per month in financial transactions but hopes to use the infusion of venture funding to accelerate customer adoption. The company also plans to launch a tax prep offering that they say will make the tax prep experience “easy and seamless.”

“It’s our first foray into Pilot’s larger mission, which is taking care of running your company’s entire back office so you can focus on your business,” Daher said.

As for whether the team will sell to another big acquirer, it’s unlikely.

“The opportunity for Pilot is so large and so substantive, I think it would be a mistake for this to be anything other than a large and enduring public company,” Daher said. “This is the company that we’re going to do this with.”


Source: The Tech Crunch

Read More

Get ready for a new era of personalized entertainment

Posted on Apr 13, 2019 in Amazon, Artificial Intelligence, Column, computing, Content, Facebook, machine learning, Marketing, Multimedia, personalization, smart devices, Spotify, Streaming Media, streaming services, Twitter, virtual reality, world wide web | 0 comments

New machine learning technologies, user interfaces and automated content creation techniques are going to expand the personalization of storytelling beyond algorithmically generated news feeds and content recommendation.

The next wave will be software-generated narratives that are tailored to the tastes and sentiments of a consumer.

Concretely, it means that your digital footprint, personal preferences and context unlock alternative features in the content itself, be it a news article, live video or a hit series on your streaming service.

The title contains different experiences for different people.

From smart recommendations to smarter content

When you use YouTube, Facebook, Google, Amazon, Twitter, Netflix or Spotify, algorithms select what gets recommended to you. The current mainstream services and their user interfaces and recommendation engines have been optimized to serve you content you might be interested in.

Your data, other people’s data, content-related data and machine learning methods are used to match people and content, thus improving the relevance of content recommendations and efficiency of content distribution.

However, so far the content experience itself has mostly been the same for everyone. If the same news article, live video or TV series episode gets recommended to you and me, we both read and watch the same thing, experiencing the same content.

That’s about to change. Soon we’ll be seeing new forms of smart content, in which user interface, machine learning technologies and content itself are combined in a seamless manner to create a personalized content experience.

What is smart content?

Smart content means that the content experience itself is affected by who is seeing, watching, reading or listening to the content. The content itself changes based on who you are.

We are already seeing the first forerunners in this space. TikTok’s whole content experience is driven by very short videos, audiovisual content sequences if you will, ordered and woven together by algorithms. Every user sees a different, personalized, “whole” based on her viewing history and user profile.

At the same time, Netflix has recently started testing new forms of interactive content (TV series episodes, e.g. Black Mirror: Bandersnatch) in which the user’s own choices directly affect the content experience, including dialogue and storyline. And more is on its way. With the Love, Death & Robots series, Netflix is experimenting with episode order within a series, serving the episodes in a different order to different users.

Earlier predecessors of interactive audio-visual content include sports event streaming, in which the user can decide which particular stream she follows and how she interacts with the live content, for example by rewinding the stream and spotting the key moments based on her own interests.

Simultaneously, we’re seeing how machine learning technologies can be used to create photo-like images of imaginary people, creatures and places. Current systems can recreate and alter entire videos, for example by changing the style, scenery, lighting, environment or central character’s face. Additionally, AI solutions are able to generate music in different genres.

Now imagine that TikTok’s individual short videos were automatically personalized with effects chosen by an AI system, and thus the whole video was customized for you. Or that the choices in Netflix’s interactive content affecting the plot twists, dialogue and even the soundtrack were made automatically by algorithms based on your profile.

Personalized smart content is coming to news as well. Automated systems, using today’s state-of-the-art NLP technologies, can generate long pieces of concise, comprehensible and even inventive textual content at scale. At present, media houses use automated content creation systems, or “robot journalists”, to create news material varying from complete articles to audio-visual clips and visualizations. Through content atomization (breaking content into small modular chunks of information) and machine learning, content production can be increased massively to support smart content creation.

Say that a news article you read or listen to is about a specific political topic that is unfamiliar to you. Compared with the version served to a friend who is really deep into politics, your version of the story might use different concepts and offer a different angle. A beginner’s smart content news experience would differ from the experience of a topic enthusiast.

Content itself will become a software-like fluid and personalized experience, where your digital footprint and preferences affect not just how the content is recommended and served to you, but what the content actually contains.

Automated storytelling?

How is it possible to create smart content that contains different experiences for different people?

Content needs to be thought of and treated as an iterative and configurable process rather than a ready-made static whole that is finished once it has been published to the distribution pipeline.

Importantly, the core building blocks of the content experience change: smart content consists of atomized modular elements that can be modified, updated, remixed, replaced, omitted and activated based on varying rules. In addition, content modules that have been made in the past can be reused where applicable. Content is designed and developed more like software.
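As a purely hypothetical sketch (the module fields, audience labels and selection rule below are invented for illustration), atomized content might be stored as a set of modules plus an assembly step that activates different modules for different readers, echoing the beginner-versus-enthusiast news example above.

```python
# Hypothetical sketch of atomized, rule-activated content modules.
# Field names, audience labels and the rule are illustrative only.

article_modules = [
    {"id": "lead",         "audience": {"beginner", "enthusiast"},
     "text": "Parliament passed the budget bill today."},
    {"id": "backgrounder", "audience": {"beginner"},
     "text": "A budget bill sets out government spending for the coming year."},
    {"id": "deep_dive",    "audience": {"enthusiast"},
     "text": "The bill shifts two percent of spending toward infrastructure."},
]

def assemble(modules, reader_profile):
    """Activate only the modules whose rules match this reader."""
    level = reader_profile.get("politics_level", "beginner")
    return " ".join(m["text"] for m in modules if level in m["audience"])

print(assemble(article_modules, {"politics_level": "beginner"}))
print(assemble(article_modules, {"politics_level": "enthusiast"}))
```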

Currently a significant amount of human effort and computing resources are used to prepare content for machine-powered content distribution and recommendation systems, varying from smart news apps to on-demand streaming services. With smart content, the content creation and its preparation for publication and distribution channels wouldn’t be separate processes. Instead, metadata and other invisible features that describe and define the content are an integral part of the content creation process from the very beginning.

Turning Donald Glover into Jay Gatsby

With smart content, the narrative or image itself becomes an integral part of an iterative feedback loop, in which the user’s actions, emotions and other signals as well as the visible and invisible features of the content itself affect the whole content consumption cycle from the content creation and recommendation to the content experience. With smart content features, a news article or a movie activates different elements of the content for different people.

It’s very likely that smart content for entertainment purposes will have different features and functions than news media content. Moreover, people expect a frictionless and effortless content experience, and thus smart content differs from games: it doesn’t necessarily require direct actions from the user. If the person wants, content personalization happens proactively and automatically, without explicit user interaction.

Creating smart content requires both human curation and machine intelligence. Humans focus on things that require creativity and deep analysis while AI systems generate, assemble and iterate the content that becomes dynamic and adaptive just like software.

Sustainable smart content

Smart content has different configurations and representations for different users, user interfaces, devices, languages and environments. The same piece of content contains elements that can be accessed through a voice user interface or presented in augmented reality applications. Or the whole piece of content expands into a fully immersive virtual reality experience.

As with personalized user interfaces and smart devices, smart content can be used for good and bad. It can be used to enlighten and empower, as well as to trick and mislead. Thus it’s critical that a human-centered approach and sustainable values are built into the very core of smart content creation. Personalization needs to be transparent, and the user needs to be able to choose whether she wants the content to be personalized or not. And of course, not all content will be smart in the same way, if at all.

If used in a sustainable manner, smart content can break filter bubbles and echo chambers, as it can be used to make a wide variety of information more accessible to diverse audiences. Through personalization, challenging topics can be presented to people according to their abilities and preferences, regardless of their background or level of education. For example, a beginner’s version of a vaccination article or a digital media literacy piece might use gamification elements, while a more experienced user gets a thorough, fact-packed account of recent developments and research results.

Smart content also aligns with efforts against today’s information operations, such as fake news and its different forms such as “deep fakes” (http://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes). If content is like software, legitimate software runs on your devices and interfaces without a problem. On the other hand, machine-generated, realistic-looking but suspicious content, like a deep fake, can be detected and filtered out based on its signature and other machine-readable qualities.


Smart content is the ultimate combination of user experience design, AI technologies and storytelling.

News media should be among the first to start experimenting with smart content. When intelligent content starts eating the world, one should be creating one’s own intelligent content.

The first players to master smart content will be among tomorrow’s reigning digital giants. And that’s one of the main reasons why today’s tech titans are moving seriously into the content game. Smart content is coming.


Source: The Tech Crunch

Read More

Vizion.ai launches its managed Elasticsearch service

Posted on Mar 28, 2019 in Amazon Web Services, api, Artificial Intelligence, Caching, cloud computing, computing, Developer, Elastic, Elasticsearch, Enterprise, ML, TC, world wide web | 0 comments

Setting up Elasticsearch, the open-source system that many companies large and small use to power their distributed search and analytics engines, isn’t the hardest thing. What is very hard, though, is to provision the right amount of resources to run the service, especially when your users’ demand comes in spikes, without overpaying for unused capacity. Vizion.ai’s new Elasticsearch Service does away with all of this by essentially offering Elasticsearch as a service and only charging its customers for the infrastructure they use.

Vizion.ai’s service automatically scales up and down as needed. It’s a managed service and delivered as a SaaS platform that can support deployments on both private and public clouds, with full API compatibility with the standard Elastic stack that typically includes tools like Kibana for visualizing data, Beats for sending data to the service and Logstash for transforming the incoming data and setting up data pipelines. Users can easily create several stacks for testing and development, too, for example.
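In practice, that API compatibility means a client written against stock Elasticsearch should work unchanged when pointed at the managed endpoint. The snippet below is a generic elasticsearch-py (7.x-style) sketch; the endpoint URL and index name are placeholders rather than Vizion.ai specifics.

```python
# Generic Elasticsearch client usage (elasticsearch-py, 7.x-style API).
# The endpoint and index name are placeholders; the point is that the same
# calls work against any API-compatible managed service.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://your-managed-endpoint.example.com:9200")

# Index a document, make it searchable, then query it back.
es.index(index="app-logs", body={"message": "user signed in", "level": "info"})
es.indices.refresh(index="app-logs")

results = es.search(index="app-logs", body={"query": {"match": {"message": "signed"}}})
for hit in results["hits"]["hits"]:
    print(hit["_source"]["message"])
```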

Vizion.ai GM and VP Geoff Tudor

“When you go into the AWS Elasticsearch service, you’re going to be looking at dozens or hundreds of permutations for trying to build your own cluster,” Vizion.ai’s VP and GM Geoff Tudor told me. “Which instance size? How many instances? Do I want geographical redundancy? What’s my networking? What’s my security? And if you choose wrong, then that’s going to impact the overall performance. […] We do balancing dynamically behind that infrastructure layer.” To do this, the service looks at the utilization patterns of a given user and then allocates resources to optimize for the specific use case.

What Vizion.ai has done here is take some of the work from its parent company Panzura, a multi-cloud storage service for enterprises that has plenty of patents around data caching, and apply it to this new Elasticsearch service.

There are obviously other companies that offer commercial Elasticsearch platforms already. Tudor acknowledges this, but argues that his company’s platform is different. With other products, he argues, you have to decide on the size of your block storage for your metadata upfront, for example, and you typically want SSDs for better performance, which can quickly get expensive. Thanks to Panzura’s IP, Vizion.ai is able to bring down the cost by caching recent data on SSDs and keeping the rest in cheaper object storage pools.
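The caching idea itself is straightforward to picture. The toy sketch below is not Panzura’s or Vizion.ai’s implementation, just an in-memory illustration of a hot SSD-style tier with LRU eviction sitting in front of a cheaper cold store.

```python
# Toy sketch of tiered storage: a small, fast "SSD" cache for recently used
# blocks, backed by a cheaper, slower "object store" for everything else.
# Both tiers are in-memory dicts purely for illustration.
from collections import OrderedDict

class TieredStore:
    def __init__(self, ssd_capacity=2):
        self.ssd = OrderedDict()          # hot tier: recently used blocks
        self.object_store = {}            # cold tier: everything else
        self.ssd_capacity = ssd_capacity

    def put(self, key, value):
        self.object_store[key] = value    # durable copy always lands in the cold tier
        self._promote(key, value)

    def get(self, key):
        if key in self.ssd:               # cache hit: cheap and fast
            self.ssd.move_to_end(key)
            return self.ssd[key]
        value = self.object_store[key]    # cache miss: fetch from cold storage...
        self._promote(key, value)         # ...and promote it for future reads
        return value

    def _promote(self, key, value):
        self.ssd[key] = value
        self.ssd.move_to_end(key)
        if len(self.ssd) > self.ssd_capacity:
            self.ssd.popitem(last=False)  # evict the least recently used block

store = TieredStore()
for i in range(4):
    store.put(f"segment-{i}", f"data-{i}")
print(store.get("segment-0"))             # served from the cold tier, then cached
```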

He also noted that the company is positioning the overall Vizion.ai service, with the Elasticsearch service as one of the earliest components, as a platform for running AI and ML workloads. Support for TensorFlow, PredictionIO (which plays nicely with Elasticsearch) and other tools is also in the works. “We want to make this an easy serverless ML/AI consumption in a multi-cloud fashion, where not only can you leverage the compute, but you can also have your storage of record at a very cost-effective price point.”


Source: The Tech Crunch

Read More

Beto O’Rourke could be the first hacker president

Posted on Mar 15, 2019 in articles, computing, Government, hacker, hacking, hacktivism, president, Security, texas | 0 comments

Democratic presidential candidate Beto O’Rourke has revealed he was a member of a notorious decades-old hacking group.

The former congressman was a member of the Texas-based hacker group, the Cult of the Dead Cow, known for inspiring early hacktivism in the internet age and building exploits and hacks for Microsoft Windows. The group used the internet as a platform in the 1990s to protest real-world events, often to promote human rights and denounce censorship. Among its many releases, the Cult of the Dead Cow was best known for its Back Orifice program, a remote access and administration tool.

O’Rourke went by the handle “Psychedelic Warlord,” as revealed by Reuters, which broke the story.

But as he climbed the political ranks, first elected to the El Paso city council in 2005, he reportedly grew concerned that his membership with the group would harm his political aspirations. The group’s members kept O’Rourke’s secret safe until the ex-hacker confirmed to Reuters his association with the group.

Reuters described him as the “most prominent ex-hacker in American political history,” who on Thursday announced his candidacy for president of the United States.

If he wins the White House, he would become the first hacker president.

O’Rourke’s history sheds light on how the candidate approaches and understands the technological issues that face the U.S. today. He’s one of the few presidential candidates to run for the White House with more than a modicum of tech knowledge — and the crucial awareness of the good and the problems tech can bring at a policy level.

“I understand the democratizing power of the internet, and how transformative it was for me personally, and how it leveraged the extraordinary intelligence of these people all over the country who were sharing ideas and techniques,” O’Rourke told Reuters.

The 46-year-old has yet to address supporters about the new revelations.


Source: The Tech Crunch

Read More

Apple ad focuses on iPhone’s most marketable feature — privacy

Posted on Mar 14, 2019 in Apple, computing, digital media, digital rights, Facebook, Hardware, human rights, identity management, iPhone, law, Mobile, Privacy, TC, terms of service, Tim Cook, United States | 0 comments

Apple is airing a new ad spot in primetime today. Focused on privacy, the spot is visually cued, with no dialog and a simple tagline: Privacy. That’s iPhone.

In a series of humorous vignettes, the message is driven home that sometimes you just want a little privacy. The spot has only one line of text otherwise, and it’s in keeping with Apple’s messaging on privacy over the long and short term. “If privacy matters in your life, it should matter to the phone your life is on.”

The spot will air tonight in primetime in the U.S. and extend through March Madness. It will then air in select other countries.

You’d have to be hiding under a rock not to have noticed Apple positioning privacy as a differentiating factor between itself and other companies. Beginning a few years ago, CEO Tim Cook began taking more and more public stances on what the company felt to be your “rights” to privacy on their platform and how that differed from other companies. The undercurrent being that Apple was able to take this stance because its first-party business relies on a relatively direct relationship with customers who purchase its hardware and, increasingly, its services.

This stands in contrast to the model of other tech giants like Google or Facebook, which insert an interstitial layer of monetization on top of that relationship by applying personal information about you (in somewhat anonymized fashion) to sell their platforms to advertisers, which in turn can sell to you better.

Turning the ethical high ground into a marketing strategy is not without its pitfalls, though, as Apple has discovered recently with a (now patched) high-profile FaceTime bug that allowed people to turn your phone into a listening device, Facebook’s manipulation of App Store permissions and the revelation that there was some long overdue house cleaning needed in its Enterprise Certificate program.

I did find it interesting that the iconography of the “Private Side” spot very, very closely associates the concepts of privacy and security. They are separate, but interrelated, obviously. This spot says these are one and the same. It’s hard to enforce privacy without security, of course, but in the mind of the public I think there is very little difference between the two.

The App Store itself, of course, still hosts apps from Google and Facebook among thousands of others that use personal data of yours in one form or another. Apple’s argument is that it protects the data you give to your phone aggressively by processing on the device, collecting minimal data, disconnecting that data from the user as much as possible and giving users as transparent a control interface as possible. All true. All far, far better efforts than the competition.

Still, there is room to run, I feel, when it comes to Apple adjudicating what should be considered a societal norm when it comes to the use of personal data on its platform. If it’s going to be the absolute arbiter of what flies on the world’s most profitable application marketplace, it might as well use that power to get a little more feisty with the bigcos (and littlecos) that make their living on our data.

I mention the issues Apple has had above not as a dig, though some might be inclined to view Apple integrating privacy with marketing as boldness bordering on hubris. I, personally, think there’s still a major difference between a company that has situational loss of privacy while having a systemic dedication to privacy and, well, most of the rest of the ecosystem which exists because they operate an “invasion of privacy as a service” business.

Basically, I think stating privacy is your mission is still supportable, even if you have bugs. But attempting to ignore that you host the data platforms that thrive on it is a tasty bit of prestidigitation.

But that might be a little too verbose as a tagline.


Source: The Tech Crunch

Read More

Former Dropbox exec Dennis Woodside joins Impossible Foods as its first President

Posted on Mar 14, 2019 in California, Chief Operating Officer, cloud storage, Companies, computing, dennis woodside, Dropbox, executive, Food, food and drink, Google, Impossible foods, manufacturing, meat substitutes, Motorola Mobility, president, Redwood City, Singapore, supply chain, TC, United States | 0 comments

Former Google and Dropbox executive Dennis Woodside has joined the meat replacement developer Impossible Foods as the company’s first President.

Woodside, who previously shepherded Dropbox through its initial public offering, is a longtime technology executive who is making his first foray into the food business.

The 25-year tech industry veteran most recently served as the chief operating officer of Dropbox, and previously was the chief executive of Motorola Mobility after that company’s acquisition by Google.

“I love what Impossible Foods is doing: using science and technology to deliver delicious and nutritious foods that people love, in an environmentally sustainable way,” Woodside said. “I’m equally thrilled to focus on providing the award-winning Impossible Burger and future products to millions of consumers, restaurants and retailers.”

According to a statement, Woodside will be responsible for the company’s operations, manufacturing, supply chain, sales, marketing, human resources and other functions.

The company currently has a staff of 350 divided between its Redwood City, Calif. headquarters and its Oakland manufacturing plant.

Impossible Foods now slings its burger in restaurants across the United States, Hong Kong, Macau and Singapore and is expecting to launch a grocery store product later this year.


Source: The Tech Crunch

Read More