The blog of DataDiggers

Teams autonomously mapping the depths take home millions in Ocean Discovery Xprize

Posted on May 31, 2019 in Artificial Intelligence, conservation, Gadgets, Hardware, Robotics, Science, TC, XPRIZE

There’s a whole lot of ocean on this planet, and we don’t have much of an idea what’s at the bottom of most of it. That could change with the craft and techniques created during the Ocean Discovery Xprize, which had teams competing to map the sea floor quickly, precisely and autonomously. The winner just took home $4 million.

A map of the ocean would be valuable in and of itself, of course, but any technology used to do so could be applied in many other ways, and who knows what potential biological or medical discoveries hide in some nook or cranny a few thousand fathoms below the surface?

The prize, sponsored by Shell, started back in 2015. The goal was, ultimately, to create a system that could map hundreds of square kilometers of the sea floor at a five-meter resolution in less than a day — oh, and everything has to fit in a shipping container. For reference, existing methods do nothing like this, and are tremendously costly.

But as is usually the case with this type of competition, the difficulty did not discourage the competitors — it only spurred them on. Since 2015, then, the teams have been working on their systems and traveling all over the world to test them.

Originally the teams were to test in Puerto Rico, but after the devastating hurricane season of 2017, the whole operation was moved to the Greek coast. Ultimately after the finalists were selected, they deployed their craft in the waters off Kalamata and told them to get mapping.

Team GEBCO’s surface vehicle

“It was a very arduous and audacious challenge,” said Jyotika Virmani, who led the program. “The test itself was 24 hours, so they had to stay up, then immediately following that was 48 hours of data processing, after which they had to give us the data. It takes more traditional companies about two weeks or so to process data for a map once they have the raw data — we’re pushing for real time.”

This wasn’t a test in a lab bath or pool. This was the ocean, and the ocean is a dangerous place. But amazingly there were no disasters.

“Nothing was damaged, nothing imploded,” she said. “We ran into weather issues, of course. And we did lose one piece of technology that was subsequently found by a Greek fisherman a few days later… but that’s another story.”

At the start of the competition, Virmani said, there was feedback from the entrants that the autonomous piece of the task simply wasn't going to be possible. The last few years have proven otherwise: the winning team not only met but exceeded the requirements of the task.

“The winning team mapped more than 250 square kilometers in 24 hours, at the minimum five-meter resolution, and around 140 square kilometers of that exceeded the resolution requirement,” Virmani told me. “It was all unmanned: An unmanned surface vehicle that took the submersible out, then recovered it at sea, unmanned again, and brought it back to port. They had such great control over it — they were able to change its path and its programming throughout that 24 hours as they needed to.” (It should be noted that unmanned does not necessarily mean totally hands-off — the teams were permitted a certain amount of agency in adjusting or fixing the craft’s software or route.)

A five-meter resolution, if you can’t quite picture it, would produce a map of a city that showed buildings and streets clearly, but is too coarse to catch, say, cars or street signs. When you’re trying to map two-thirds of the globe, though, this resolution is more than enough — and infinitely better than the nothing we currently have. (Unsurprisingly, it’s also certainly enough for an oil company like Shell to prospect new deep-sea resources.)
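For a sense of the data volume that implies, here's a quick back-of-the-envelope calculation (mine, not the prize's; treating the map as a uniform five-meter grid is an assumption):

```python
# Rough arithmetic: depth values in a 250 km^2 map gridded at 5 m.
area_km2 = 250
cell_m = 5

area_m2 = area_km2 * 1_000_000        # 250 km^2 = 2.5e8 m^2
cells = area_m2 // (cell_m * cell_m)  # one depth sounding per 5 m x 5 m cell

print(f"{cells:,} soundings")         # 10,000,000 soundings in under a day
```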

The winning team was GEBCO, composed of veteran hydrographers — ocean mapping experts, you know. In addition to the highly successful unmanned craft (Sea-Kit, already cruising the English Channel for other purposes), the team did a lot of work on the data-processing side, creating a cloud-based solution that helped them turn the maps around quickly. (That may also prove to be a marketable service in the future.) They were awarded $4 million, in addition to their cash for being selected as a finalist.

The runner-up was Kuroshio, which had great resolution but was unable to map the full 250 km² due to weather problems. They snagged a million.

A bonus prize for having the submersible track a chemical signal to its source didn’t exactly have a winner, but the teams’ entries were so impressive that the judges decided to split the million between the Tampa Deep Sea Xplorers and Ocean Quest, which amazingly enough is made up mostly of middle-schoolers. The latter gets $800,000, which should help pay for a few new tools in the shop there.

Lastly, a $200,000 innovation prize was given to Team Tao out of the U.K., which had a very different style to its submersible that impressed the judges. While most of the competitors opted for a craft that went “lawnmower-style” above the sea floor at a given depth, Tao’s craft dropped down like a plumb bob, pinging the depths as it went down and back up before moving to a new spot. This provides a lot of other opportunities for important oceanographic testing, Virmani noted.

Having concluded the prize, the organization has just a couple more tricks up its sleeve. GEBCO, which stands for General Bathymetric Chart of the Oceans, is partnering with The Nippon Foundation on Seabed 2030, an effort to map the entire sea floor over the next decade and provide that data to the world for free.

And the program is also — why not? — releasing an anthology of short sci-fi stories inspired by the idea of mapping the ocean. “A lot of our current technology is from the science fiction of the past,” said Virmani. “So we told the authors, imagine we now have a high-resolution map of the sea floor, what are the next steps in ocean tech and where do we go?” The resulting 19 stories, written from all 7 continents (yes, one from Antarctica), will be available June 7.


Source: The Tech Crunch


Facial recognition startup Kairos settles lawsuits with founder and former CEO Brian Brackeen

Posted on May 31, 2019 in Artificial Intelligence, brian brackeen, kairos, Lawsuit, TC

Facial recognition startup Kairos, founded by Brian Brackeen, has settled its lawsuit with Brackeen following his ouster from the company late last year. In addition to forcing him out of the company he founded, Kairos sued Brackeen, alleging the misappropriation of corporate funds and misleading shareholders. In response, Brackeen countersued Kairos, alleging the company and its CEO Melissa Doval intentionally destroyed his reputation through fraudulent conduct.

Now, both Kairos and Brackeen are ready to put this all behind them. Both parties have dropped their respective lawsuits and reached a settlement, which entails continuing to recognize Brackeen as the founder of Kairos.

“We are pleased to be putting this episode behind us, and the opportunity to keep the business focused on growth,” Doval said in a press release. “We thank Mr. Brackeen for working towards a resolution, and wish him the best for his future endeavors.”

Brackeen tells TechCrunch he’s excited about the settlement and can now move on to become an investor at Lightship Capital, a new fund where he serves as managing partner. The fund is geared toward supporting underrepresented founders and does not require board seats to invest.

“I have become the investor I didn’t have enough of…founder focused, principled, and growth minded,” Brackeen said in an email to TechCrunch. “Our firm puts founder support at the front of our thinking because we know what happens to shareholder value when you don’t. That’s the blessing that’s come from this chapter in my life. On to the next!”


Source: The Tech Crunch


Reality Check: The marvel of computer vision technology in today’s camera-based AR systems

Posted on May 15, 2019 in Animation, AR, ar/vr, Artificial Intelligence, Augmented Reality, Column, Computer Vision, computing, Developer, digital media, Gaming, gif, Global Positioning System, gps, mobile phones, neural network, starbucks, TC, virtual reality, VR

British science fiction writer Sir Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Augmented reality has the potential to instill awe and wonder in us just as magic would. For the very first time in the history of computing, we now have the ability to blur the line between the physical world and the virtual world. AR promises to bring forth the dawn of a new creative economy, where digital media can be brought to life and given the ability to interact with the real world.

AR experiences can seem magical, but what exactly is happening behind the curtain? To answer this, we must look at the three basic foundations of a camera-based AR system like our smartphones.

  1. How do computers know where they are in the world? (Localization + Mapping)
  2. How do computers understand what the world looks like? (Geometry)
  3. How do computers understand the world as we do? (Semantics)

Part 1: How do computers know where they are in the world? (Localization)

Mars Rover Curiosity taking a selfie on Mars. Source: https://www.nasa.gov/jpl/msl/pia19808/looking-up-at-mars-rover-curiosity-in-buckskin-selfie/

When NASA scientists put the rover onto Mars, they needed a way for the robot to navigate itself on a different planet without the use of a global positioning system (GPS). They came up with a technique called Visual Inertial Odometry (VIO) to track the rover’s movement over time without GPS. This is the same technique that our smartphones use to track their spatial position and orientation.

A VIO system is made up of two parts: a camera that tracks visual features from frame to frame, and an inertial measurement unit (IMU) that measures acceleration and rotation.
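As a toy illustration of how those two parts combine (a generic complementary-filter-style sketch with made-up numbers, not the rover's or any phone's actual implementation), the inertial stream is integrated at high rate while slower visual fixes correct its drift:

```python
import numpy as np

# Toy 1-D visual-inertial odometry: dead-reckon position from a noisy
# accelerometer, then nudge the estimate whenever a visual fix arrives.
dt = 0.01      # IMU sample period (100 Hz)
gain = 0.2     # how strongly a visual fix corrects the inertial estimate

pos_est, vel_est = 0.0, 0.0
rng = np.random.default_rng(0)

for step in range(1, 1001):
    t = step * dt
    accel = 0.5 + rng.normal(0, 0.05)    # noisy reading of a constant 0.5 m/s^2
    vel_est += accel * dt                # integrate acceleration -> velocity
    pos_est += vel_est * dt              # integrate velocity -> position (drifts)

    if step % 30 == 0:                   # visual odometry runs slower (~3 Hz)
        visual_fix = 0.25 * t**2 + rng.normal(0, 0.02)  # true pos + camera noise
        pos_est += gain * (visual_fix - pos_est)        # drift correction

print(f"estimated position after 10 s: {pos_est:.2f} m (true: 25.00 m)")
```

Without the visual corrections, the accelerometer noise alone would make the position estimate wander; without the inertial stream, the estimate would only update a few times per second.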


Source: The Tech Crunch


Google’s Translatotron converts one spoken language to another, no text involved

Posted on May 15, 2019 in Artificial Intelligence, Google, machine learning, machine translation, Science, Translation

Every day we creep a little closer to Douglas Adams’ famous and prescient Babel fish. A new research project from Google takes spoken sentences in one language and outputs spoken words in another — but unlike most translation techniques, it uses no intermediate text, working solely with the audio. This makes it quick, but more importantly lets it more easily reflect the cadence and tone of the speaker’s voice.

Translatotron, as the project is called, is the culmination of several years of related work, though it’s still very much an experiment. Google’s researchers, and others, have been looking into the possibility of direct speech-to-speech translation for years, but only recently have those efforts borne fruit worth harvesting.

Translating speech is usually done by breaking down the problem into smaller sequential ones: turning the source speech into text (speech-to-text, or STT), turning text in one language into text in another (machine translation), and then turning the resulting text back into speech (text-to-speech, or TTS). This works quite well, really, but it isn’t perfect; each step has types of errors it is prone to, and these can compound one another.
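As a sketch of that conventional cascade (the three stages are as described above; the function bodies are dummy placeholders, not Google's or anyone's real models), the structure — and where errors stack up — looks like this:

```python
# Hypothetical three-stage cascade. Each function stands in for a trained
# model; the point is the chain, and that each stage's errors feed the next.
def speech_to_text(audio: bytes) -> str:      # STT: source audio -> source text
    return "hola mundo"                        # dummy output for illustration

def machine_translate(text: str) -> str:      # MT: source text -> target text
    return "hello world"

def text_to_speech(text: str) -> bytes:       # TTS: target text -> target audio
    return f"<synthesized: {text}>".encode()   # voice character is lost here

def cascaded_translation(audio: bytes) -> bytes:
    text_src = speech_to_text(audio)           # STT errors enter here
    text_tgt = machine_translate(text_src)     # MT compounds them
    return text_to_speech(text_tgt)            # TTS can't recover speaker tone

print(cascaded_translation(b"raw audio"))
```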

Furthermore, it’s not really how multilingual people translate in their own heads, as testimony about their own thought processes suggests. How exactly it works is impossible to say with certainty, but few would say that they break down the text and visualize it changing to a new language, then read the new text. Human cognition is frequently a guide for how to advance machine learning algorithms.

Spectrograms of source and translated speech. The translation, let us admit, is not the best. But it sounds better!

To that end, researchers began looking into converting spectrograms, detailed frequency breakdowns of audio, of speech in one language directly to spectrograms in another. This is a very different process from the three-step one, and has its own weaknesses, but it also has advantages.

One is that, while complex, it is essentially a single-step process rather than multi-step, which means, assuming you have enough processing power, Translatotron could work quicker. But more importantly for many, the process makes it easy to retain the character of the source voice, so the translation doesn’t come out robotically, but with the tone and cadence of the original sentence.

Naturally this has a huge impact on expression, and someone who relies on translation or voice synthesis regularly will appreciate that not only what they say comes through, but how they say it. It’s hard to overstate how important this is for regular users of synthetic speech.
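To make the spectrogram representation concrete, here is a minimal sketch of computing the kind of log-mel spectrogram such a model consumes and emits, using the open-source librosa library (the file name and parameters are illustrative assumptions; Google's actual feature pipeline isn't public in this form):

```python
import librosa

# Load speech and compute a log-mel spectrogram: a time-frequency matrix
# that a direct speech-to-speech model would map straight to a target-
# language spectrogram, skipping text entirely.
audio, sr = librosa.load("speech.wav", sr=16000)               # 16 kHz mono
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=80)
log_mel = librosa.power_to_db(mel)                             # log scale

# Shape is (80 mel bands, n_frames); on the output side, a vocoder turns
# the predicted matrix back into a waveform.
print(log_mel.shape)
```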

The accuracy of the translation, the researchers admit, is not as good as the traditional systems, which have had more time to hone their accuracy. But many of the resulting translations are (at least partially) quite good, and being able to include expression is too great an advantage to pass up. In the end, the team modestly describes their work as a starting point demonstrating the feasibility of the approach, though it’s easy to see that it is also a major step forward in an important domain.

The paper describing the new technique was published on arXiv, and you can browse samples of speech, from source to traditional translation to Translatotron, at this page. Just be aware that these are not all selected for the quality of their translation, but serve more as examples of how the system retains expression while getting the gist of the meaning.


Source: The Tech Crunch


Microsoft open-sources a crucial algorithm behind its Bing Search services

Posted on May 15, 2019 in Artificial Intelligence, Bing, Cloud, computing, Developer, Microsoft, open source software, search results, Software, windows phone, world wide web

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return search results to its users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where users search through vast data troves, including in retail, though in this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
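Conceptually, that final lookup reduces to nearest-neighbor search over vectors. The brute-force sketch below shows what "most related vectors" means; it is a stand-in for illustration only, since the whole point of SPTAG's space-partition trees and neighborhood graphs is to avoid this exhaustive scan at billion-vector scale:

```python
import numpy as np

# Toy vector search: find the indexed vectors most similar to a query.
rng = np.random.default_rng(42)
index_vectors = rng.normal(size=(10_000, 128))  # stand-ins for encoded items
query = rng.normal(size=128)                    # stand-in for an encoded query

# Cosine similarity between the query and every indexed vector
sims = index_vectors @ query / (
    np.linalg.norm(index_vectors, axis=1) * np.linalg.norm(query)
)

top_k = np.argsort(sims)[-5:][::-1]             # ids of the 5 closest matches
print(top_k, sims[top_k].round(3))
```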

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.


Source: The Tech Crunch


Madrona Venture Labs raises $11M to build companies from the ground up

Posted on May 15, 2019 in alpha, Amazon, Artificial Intelligence, blake irving, eBay, economy, entrepreneurship, erik blachford, Facebook, Finance, GoDaddy, madrona venture group, Microsoft, money, Private Equity, Seattle, spencer rascoff, Startup company, TC, Trinity Ventures, Venture Capital, venture capital Firms, venture capital funds, Zillow

In regions where would-be entrepreneurs need a little more support and encouragement before they’ll quit their day job, the startup studio model is taking off.

In Seattle, Madrona Venture Labs (MVL), a studio founded within one of the city’s oldest and most-celebrated venture capital firms, Madrona Venture Group, has raised $11.3 million. The investment brings the studio’s total funding to $20 million.

Traditional venture capital funds invite founders to pitch their business idea to a line-up of partners. Sometimes that’s a founder with an idea looking for seed capital; other times it’s a more mature company looking to scale. When it comes to startup studios, the partners themselves craft startup ideas internally, recruiting entrepreneurs to lead the projects, then building them from the ground up within their own safe, protective walls. After a project passes the studio’s litmus test, i.e. shows proof of traction, product-market fit and more, it’s spun out with funding from Madrona and other VCs within its large and growing investor network.

For aspiring entrepreneurs deterred by the risk factors inherent to building venture-backed startups, it’s a highly desirable route. In the Pacific Northwest, where MVL focuses its efforts, it’s a chance to lure Microsoft and Amazon employees into the world of entrepreneurship.

“We want to be an onboard for founders in our market,” MVL managing director Mike Fridgen, who previously led the eBay-acquired business Decide.com, tells TechCrunch. “In Seattle, everyone isn’t a co-founder or an angel investor. Not everyone has been at a startup. A lot of people coming here are coming to work at Amazon, Microsoft or one of the larger satellite offices like Facebook. We want to help them fast-track learning, fundraising and everything else that comes with launching a successful company.”

Fridgen; MVL managing director Ben Elowitz, who co-founded the online jewelry marketplace Blue Nile; and chief technology officer Jay Bartot, the co-founder of Hulu-acquired Vhoto, lead Madrona’s studio effort.

The investment in MVL comes in part from its parent company, Madrona, and for the first time, outside investors have acquired stakes in the practice. Alpha Edison, West River Group, Founder’s Co-op partner Rudy Gadre, Zillow co-founder Spencer Rascoff, former GoDaddy CEO Blake Irving, Trinity Ventures venture partner Gus Tai, TCV venture partner Erik Blachford and others participated.

With $1.6 billion in assets under management, Madrona is known for investments in Seattle bigwigs like Smartsheet, Rover and Redfin. The firm, which recently closed on another $100 million for an acceleration fund that will expand its geographic reach beyond the Pacific Northwest, launched its startup studio in 2014. Since then, it has spun out seven companies with an aggregate valuation of $140 million.

“There are some 85 VCs that have $300 million-plus funds,” Fridgen said. “In Seattle, we have two of the most valuable companies in the world and we have just one [big fund], Madrona; it’s the center of gravity for Seattle technology innovation.”

Companies created within MVL include Spruce Up, an AI-powered personal shopping platform, and Domicile, a luxury apartment rental service geared toward business travelers. Domicile was co-founded by Ross Saario, who spent the three years ahead of launching the startup as a general manager at Amazon. The company recently raised a $5 million round, while Spruce Up, co-founded by serial founder Mia Lewin, closed a $3 million round in May.

Other spin-outs include MightyAI, which was valued at $71 million in 2017; Nordstrom-acquired MessageYes, Chatitive and Rep the Squad. The latter, a jersey rental business, was a failure, shutting down in 2018 after failing to land necessary investment, according to GeekWire.

MVL’s latest fundraise will be used to invest in operations. Though MVL does provide its spin-outs with some capital, between $100,000 and $200,000, Fridgen said, it takes a back seat when it comes time to raise outside capital and doesn’t serve as the lead investor in deals.


Source: The Tech Crunch


How A.I. Can Help Handle Severe Weather

Posted on May 13, 2019 in Artificial Intelligence, Computers and the Internet, Energy Efficiency, Forests and Forestry, Genetic Engineering, Global Warming, Hurricane Harvey (2017), Hurricane Maria (2017), Hurricanes and Tropical Storms, Weather

Researchers from industry, academia and government agencies are finding new ways to address the problems caused by hurricanes, flooding, drought and wildfires.
Source: New York Times


Cisco open sources MindMeld conversational AI platform

Posted on May 9, 2019 in Artificial Intelligence, Cisco, Developer, Enterprise, MindMeld, open source, TC, voice recognition

Cisco announced today that it was open-sourcing the MindMeld conversational AI platform, making it available to anyone who wants to use it under the Apache 2.0 license.

MindMeld is the conversational AI company that Cisco bought in 2017. The company put the technology to use in Cisco Spark Assistant later that year to help bring voice commands to meeting hardware, which was just beginning to emerge at the time.

Today, there is a concerted effort to bring voice to enterprise use cases, and Cisco is offering the means for developers to do that with the MindMeld tool set. “Today, Cisco is taking a big step towards empowering developers with more comprehensive and practical tools for building conversational applications by open-sourcing the MindMeld Conversational AI Platform,” Cisco’s head of machine learning Karthik Raghunathan wrote in a blog post.

The company also wants to make it easier for developers to get going with the platform, so it is releasing the Conversational AI Playbook, a step-by-step guide book to help developers get started with conversation-driven applications. Cisco says this is about empowering developers, and that’s probably a big part of the reason.

But it would also be in Cisco’s best interest to have developers outside of Cisco working with and on this set of tools. By open-sourcing them, the hope is that a community of developers, whether Cisco customers or others, will begin using, testing and improving the tools, helping Cisco develop the platform faster and more broadly than it could internally, even in an organization as large as Cisco.

Of course, just because they offer it doesn’t automatically mean a community of interested developers will emerge, but given the growing popularity of voice-enabled use cases, chances are some will give it a look. It will be up to Cisco to keep them engaged.

Cisco is making all of this available on its own DevNet platform starting today.


Source: The Tech Crunch


Scientists pull speech directly from the brain

Posted on Apr 24, 2019 in Artificial Intelligence, Biotech, brain-computer interface, Health, Science, synthetic speech, TC, UCSF

In a feat that could eventually unlock the possibility of speech for people with severe medical conditions, scientists have successfully recreated the speech of healthy subjects by tapping directly into their brains. The technology is a long, long way from practical application but the science is real and the promise is there.

Edward Chang, neurosurgeon at UC San Francisco and co-author of the paper published today in Nature, explained the impact of the team’s work in a press release: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity. This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

To be perfectly clear, this isn’t some magic machine that you sit in and it translates your thoughts into speech. It’s a complex and invasive process that decodes not exactly what the subject is thinking, but what they were actually speaking.

Led by speech scientist Gopala Anumanchipalli, the experiment involved subjects who had already had large electrode arrays implanted in their brains for a different medical procedure. The researchers had these lucky people read several hundred sentences aloud while closely recording the signals detected by the electrodes.

The electrode array in question.

See, it happens that the researchers know a certain pattern of brain activity that comes after you think of and arrange words (in cortical areas like Wernicke’s and Broca’s) and before the final signals are sent from the motor cortex to your tongue and mouth muscles. There’s a sort of intermediate signal between those that Anumanchipalli and his co-author, grad student Josh Chartier, previously characterized, and which they thought may work for the purposes of reconstructing speech.

Analyzing the audio directly let the team determine what muscles and movements would be involved when (this is pretty established science), and from this they built a sort of virtual model of the person’s vocal system.

They then mapped the brain activity detected during the session to that virtual model using a machine learning system, essentially allowing a recording of a brain to control a recording of a mouth. It’s important to understand that this isn’t turning abstract thoughts into words — it’s understanding the brain’s concrete instructions to the muscles of the face, and determining from those what words those movements would be forming. It’s brain reading, but it isn’t mind reading.
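As a very rough sketch of that two-stage mapping (the shapes, models and random data below are illustrative assumptions, not the paper's architecture), decoding splits into a brain-to-articulation stage and an articulation-to-acoustics stage, chained at decode time:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Two-stage decoding sketch. Everything here is random stand-in data;
# the real work used ECoG recordings, inferred vocal-tract kinematics
# and recurrent neural networks rather than linear regressors.
rng = np.random.default_rng(0)
n_frames = 500
ecog = rng.normal(size=(n_frames, 256))          # electrode channel signals
articulation = rng.normal(size=(n_frames, 33))   # vocal-tract movement features
acoustics = rng.normal(size=(n_frames, 32))      # spectral features for synthesis

stage1 = Ridge().fit(ecog, articulation)         # brain -> virtual vocal tract
stage2 = Ridge().fit(articulation, acoustics)    # vocal tract -> sound features

# At decode time the stages chain: brain recording in, speech features out,
# which a vocoder would then render as audible (synthetic) speech.
decoded = stage2.predict(stage1.predict(ecog))
print(decoded.shape)                             # (500, 32)
```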

The resulting synthetic speech, while not exactly crystal clear, is certainly intelligible. And set up correctly, it could be capable of outputting 150 words per minute from a person who may otherwise be incapable of speech.

“We still have a ways to go to perfectly mimic spoken language,” said Chartier. “Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

For comparison, a person so afflicted, for instance with a degenerative muscular disease, often has to speak by spelling out words one letter at a time with their gaze. Picture 5-10 words per minute, with other methods for more disabled individuals going even slower. It’s a miracle in a way that they can communicate at all, but this time-consuming and less than natural method is a far cry from the speed and expressiveness of real speech.

If a person was able to use this method, they would be far closer to ordinary speech, though perhaps at the cost of perfect accuracy. But it’s not a magic bullet.

The problem with this method is that it requires a great deal of carefully collected data from what amounts to a healthy speech system, from brain to tip of the tongue. For many people it’s no longer possible to collect this data, and for others the invasive method of collection will make it impossible for a doctor to recommend. And conditions that have prevented a person from ever talking prevent this method from working as well.

The good news is that it’s a start, and there are plenty of conditions it would work for, theoretically. And collecting that critical brain and speech recording data could be done preemptively in cases where a stroke or degeneration is considered a risk.


Source: The Tech Crunch


VDOO secures $32M for a platform that uses AI to detect and fix vulnerabilities on IoT devices

Posted on Apr 24, 2019 in Artificial Intelligence, Enterprise, IoT, Security

Our universe of connected things is expanding by the day: the number of objects with embedded processors now exceeds the number of smartphones globally and is projected to reach some 18 billion devices by 2022. But just as that number is growing, so are the opportunities for malicious hackers to use these embedded devices to crack into networks, disrupting how these objects work and stealing information, a problem that analysts estimate will cost $18.3 billion to address by 2023. Now, an Israeli startup called VDOO has raised $32 million to address this, with a platform that identifies and fixes security vulnerabilities in IoT devices, and then tests to make sure that the fixes work.

The funding is being led by WRVI Capital and GGV Capital and also includes strategic investments from NTT DOCOMO (which works with VDOO), MS&AD Ventures (the venture arm of the global cyber insurance firm), and Avigdor Willenz (who founded both Galileo Technologies and Annapurna Labs, respectively acquired by Marvell and Amazon). 83North, Dell Technology Capital and David Strohm, who backed VDOO in its previous round of $13 million in January 2018, also participated, bringing the total raised by VDOO now to $45 million.

VDOO — a reference to the Hebrew word that sounds like “vee-doo” and means “making sure” — was cofounded by Netanel Davidi (co-CEO), Uri Alter (also co-CEO) and Asaf Karas (CTO). Davidi and Alter previously co-founded Cyvera, a pioneer in endpoint security that was acquired by Palo Alto Networks and became the basis for its own endpoint security product; Karas, meanwhile, comes to VDOO with extensive experience working for, among other places, the Israeli Defense Forces.

In an interview, Davidi noted that the company was created out of one of the biggest shortfalls of IoT.

“Many embedded systems have a low threshold for security because they were not created with security in mind,” he said, noting that this is partly due to concerns of how typical security fixes might impact performance, and the fact that this has typically not been a core competency for hardware makers, but something that is considered after devices are in the market. At the same time, a lot of security solutions today in the IoT space have focused on monitoring, but not fixing, he added. “Most companies have good solutions for the visibility of their systems, and are able to identify vulnerabilities on the network, but are not sufficient at protecting devices themselves.”

The sheer number of devices on the market and their spread across a range of deployments from manufacturing and other industrial scenarios, through to in-home systems that can be vulnerable even when not connected to the internet, also makes for a complicated and uneven landscape.

VDOO’s approach was to conceive of a very lightweight implementation that sits on a small group of devices — “small” is relative here: the set was 16,000 objects — applying machine learning to “learn” how different security vulnerabilities might behave to discover adjacent hacks that hadn’t yet been identified.

“For any kind of vulnerability, using deep binary analysis capabilities, we try to understand the broader idea, to figure out how a similar vulnerability can emerge,” he said.

Part of the approach is to pare down security requirements and solutions to those pertinent to the device in question, and providing clear guidance to vendors for how to best avoid problems in the first place at the development stage. VDOO then also generates specific “tailor-made on-device micro-agents” to continue the detection and repair process. (Davidi likened it to a modern approach to some cancer care: preventive measures such as periodic monitoring checks; followed by a “tailored immunotherapy” based on prior analysis of DNA.)

It currently supports Linux- and Android-based operating systems as well as FreeRTOS, with support for more systems coming soon, Davidi said. It sells its services primarily to device makers, who can push over-the-air updates to their devices after they have been purchased and implemented to keep them up to date with the latest fixes. Typical devices currently secured with VDOO tech include safety and security devices such as surveillance cameras, NVRs & DVRs, fire alarm systems, access controls, routers, switches and access points, Davidi said.

It’s the focus on providing security services for hardware makers, in fact, that helps VDOO stand out from the others in the field.

“Among all startups for embedded systems, VDOO is the first to introduce a unique, holistic approach focusing on the device vendors which are the focal enabler in truly securing devices,” said Lip-Bu Tan, founding partner of WRVI Capital. “We are delighted to back VDOO’s technology, and the exceptional team that has created advanced tools to allow vendors to secure devices as much as possible without in-house security know-how, for the first time in many decades, I see a clear demand for security, as being raised constantly in many meetings with leading OEMs worldwide, as well as software giants.”

Over the last 18 months, as VDOO has continued to expand its own reach, it has picked up customers along the way after identifying vulnerabilities in their devices. Its dataset covers some 70 million embedded systems’ binaries and more than 16,000 versions of embedded systems, and it has worked with customers to identify and address 150 zero-day vulnerabilities and 100,000 security issues that would have potentially impacted 1.5 billion devices.

Interestingly, while VDOO is building its own IP, it is also working with a number of vendors to provide many of the fixes. Davidi says that VDOO and those vendors go through fairly rigorous screening processes before integrating, and the hope is that down the line there will be more automation brought in for the “fixing” element using third-party solutions.

“VDOO brings a unique end-to-end security platform, answering the global connectivity trend and the emerging threats targeting embedded devices, to provide security as an essential enabler of extensive connected devices adoption. With its differentiated capabilities, VDOO has succeeded in acquiring global customers, including many top-tier brands. Moreover, VDOO’s ability to uncover and mitigate weaknesses created by external suppliers fits perfectly into our Supply Chain Security investment strategy,” said Glenn Solomon, managing partner at GGV Capital, in a statement. “This funding, together with the company’s great technology, skilled entrepreneurs and one of the best teams we have seen, will allow VDOO to maintain its leadership position in IoT security and expand geographies while continuing to develop its state-of-the-art technology.”

Valuation is currently not being disclosed.


Source: The Tech Crunch
