The blog of DataDiggers

Teams autonomously mapping the depths take home millions in Ocean Discovery Xprize

Posted by on May 31, 2019 in Artificial Intelligence, conservation, Gadgets, Hardware, Robotics, Science, TC, XPRIZE | 0 comments

There’s a whole lot of ocean on this planet, and we don’t have much of an idea what’s at the bottom of most of it. That could change with the craft and techniques created during the Ocean Discovery Xprize, which had teams competing to map the sea floor quickly, precisely and autonomously. The winner just took home $4 million.

A map of the ocean would be valuable in and of itself, of course, but any technology used to do so could be applied in many other ways, and who knows what potential biological or medical discoveries hide in some nook or cranny a few thousand fathoms below the surface?

The prize, sponsored by Shell, started back in 2015. The goal was, ultimately, to create a system that could map hundreds of square kilometers of the sea floor at a five-meter resolution in less than a day — oh, and everything has to fit in a shipping container. For reference, existing methods do nothing like this, and are tremendously costly.

But as is usually the case with this type of competition, the difficulty did not discourage the competitors — it only spurred them on. Since 2015, then, the teams have been working on their systems and traveling all over the world to test them.

Originally the teams were to test in Puerto Rico, but after the devastating hurricane season of 2017, the whole operation was moved to the Greek coast. Ultimately after the finalists were selected, they deployed their craft in the waters off Kalamata and told them to get mapping.

Team GEBCO’s surface vehicle

“It was a very arduous and audacious challenge,” said Jyotika Virmani, who led the program. “The test itself was 24 hours, so they had to stay up, then immediately following that was 48 hours of data processing, after which they had to give us the data. It takes more traditional companies about two weeks or so to process data for a map once they have the raw data — we’re pushing for real time.”

This wasn’t a test in a lab bath or pool. This was the ocean, and the ocean is a dangerous place. But amazingly there were no disasters.

“Nothing was damaged, nothing imploded,” she said. “We ran into weather issues, of course. And we did lose one piece of technology that was subsequently found by a Greek fisherman a few days later… but that’s another story.”

At the start of the competition, Virmani said, there was feedback from the entrants that the autonomous piece of the task was simply not going to be possible. But the last few years have proven otherwise: the winning team not only met but exceeded the requirements of the task.

“The winning team mapped more than 250 square kilometers in 24 hours, at the minimum of five meters resolution, but around 140 was more than five meters,” Virmani told me. “It was all unmanned: An unmanned surface vehicle that took the submersible out, then recovered it at sea, unmanned again, and brought it back to port. They had such great control over it — they were able to change its path and its programming throughout that 24 hours as they needed to.” (It should be noted that unmanned does not necessarily mean totally hands-off — the teams were permitted a certain amount of agency in adjusting or fixing the craft’s software or route.)

A five-meter resolution, if you can’t quite picture it, would produce a map of a city that showed buildings and streets clearly, but is too coarse to catch, say, cars or street signs. When you’re trying to map two-thirds of the globe, though, this resolution is more than enough — and infinitely better than the nothing we currently have. (Unsurprisingly, it’s also certainly enough for an oil company like Shell to prospect new deep-sea resources.)
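
To put those numbers in perspective, here is a quick back-of-envelope calculation (a rough Python sketch; the 250 square kilometers and five-meter figures come from the prize requirements, everything else is my own arithmetic):

```python
# Back-of-envelope: how many depth samples does a 5 m grid over 250 km^2 imply?
area_km2 = 250                      # area the winning team mapped in 24 hours
resolution_m = 5                    # minimum required grid resolution

area_m2 = area_km2 * 1_000_000      # 1 km^2 = 1,000,000 m^2
cells = area_m2 / resolution_m**2   # one depth value per 5 m x 5 m cell

print(f"{cells:,.0f} grid cells")                      # -> 10,000,000 cells
print(f"{cells / (24 * 3600):,.1f} soundings/second")  # -> ~115.7 sustained for 24 hours
```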

The winning team was GEBCO, composed of veteran hydrographers — ocean mapping experts, you know. In addition to the highly successful unmanned craft (Sea-Kit, already cruising the English Channel for other purposes), the team did a lot of work on the data-processing side, creating a cloud-based solution that helped them turn the maps around quickly. (That may also prove to be a marketable service in the future.) They were awarded $4 million, in addition to their cash for being selected as a finalist.

The runner-up was Kuroshio, which had great resolution but was unable to map the full 250 square kilometers due to weather problems. They snagged a million.

A bonus prize for having the submersible track a chemical signal to its source didn’t exactly have a winner, but the teams’ entries were so impressive that the judges decided to split the million between the Tampa Deep Sea Xplorers and Ocean Quest, which amazingly enough is made up mostly of middle-schoolers. The latter gets $800,000, which should help pay for a few new tools in the shop there.

Lastly, a $200,000 innovation prize was given to Team Tao out of the U.K., which had a very different style to its submersible that impressed the judges. While most of the competitors opted for a craft that went “lawnmower-style” above the sea floor at a given depth, Tao’s craft dropped down like a plumb bob, pinging the depths as it went down and back up before moving to a new spot. This provides a lot of other opportunities for important oceanographic testing, Virmani noted.

Having concluded the prize, the organization has just a couple more tricks up its sleeve. GEBCO, which stands for General Bathymetric Chart of the Oceans, is partnering with The Nippon Foundation on Seabed 2030, an effort to map the entire sea floor over the next decade and provide that data to the world for free.

And the program is also — why not? — releasing an anthology of short sci-fi stories inspired by the idea of mapping the ocean. “A lot of our current technology is from the science fiction of the past,” said Virmani. “So we told the authors, imagine we now have a high-resolution map of the sea floor, what are the next steps in ocean tech and where do we go?” The resulting 19 stories, written from all 7 continents (yes, one from Antarctica), will be available June 7.


Source: The Tech Crunch


Groupon co-founder Eric Lefkofsky just raised another $200 million for his newest company, Tempus

Posted by on May 31, 2019 in Baillie Gifford, Biotech, drug development, eric lefkofsky, Groupon, Recent Funding, Revolution Growth, Science, Startups, TC, Tempus | 0 comments

When serial entrepreneur Eric Lefkofsky grows a company, he puts the pedal to the metal. When his last company, the Chicago-based coupons site Groupon, raised $950 million from investors in 2011, it was the largest amount ever raised by a startup. Groupon was just over three years old at the time, and it went public later that same year.

Lefkofsky seems to be stealing a page from the same playbook for his newest company, Tempus. The Chicago-based genomic testing and data analysis company was founded a little more than three years ago, yet it has already hired nearly 700 employees and raised more than $500 million — including through a new $200 million round that values the company at $3.1 billion.

According to the Chicago Tribune, that new valuation makes it — as Groupon once was — one of Chicago’s most highly valued privately held companies.

So why all the fuss? As the Tribune explains it, Tempus has built a platform to collect, structure and analyze the clinical data that’s often unorganized in electronic medical record systems. The company also generates genomic data by sequencing patient DNA and other information in its lab.

The goal is to help doctors create customized treatments for each individual patient, Lefkofsky tells the paper.

So far, Tempus has partnered with numerous cancer treatment centers that are apparently giving it human data from which to learn. Tempus is also seemingly generating data “in vitro,” as is another company we featured recently called Insitro, a drug development startup founded by famed AI researcher Daphne Koller. Insitro is working on a liver disease treatment through a tie-up with Gilead, which has amassed related human data over the years that Insitro can learn from. As a complementary data source, Insitro is trying to learn what the disease does in a “dish,” then determine whether machine learning can use what it observes there to predict what happens in people.

While Tempus’ genomic testing is centered on cancer for now, Lefkofsky says the company already wants to expand into diabetes and depression, too.

In the meantime, he tells Crain’s Chicago Business that Tempus is already generating “significant” revenue. “Our oldest partners have, in most cases, now expanded to different subgroups (of cancer). What we’re doing is working.”

Investors in the latest round include Baillie Gifford; Revolution Growth; New Enterprise Associates; funds and accounts managed by T. Rowe Price; Novo Holdings; and the investment management company Franklin Templeton.


Source: The Tech Crunch


Google’s Translatotron converts one spoken language to another, no text involved

Posted by on May 15, 2019 in Artificial Intelligence, Google, machine learning, machine translation, Science, Translation | 0 comments

Every day we creep a little closer to Douglas Adams’ famous and prescient Babel fish. A new research project from Google takes spoken sentences in one language and outputs spoken words in another — but unlike most translation techniques, it uses no intermediate text, working solely with the audio. This makes it quick, but more importantly lets it more easily reflect the cadence and tone of the speaker’s voice.

Translatotron, as the project is called, is the culmination of several years of related work, though it’s still very much an experiment. Google’s researchers, and others, have been looking into the possibility of direct speech-to-speech translation for years, but only recently have those efforts borne fruit worth harvesting.

Translating speech is usually done by breaking down the problem into smaller sequential ones: turning the source speech into text (speech-to-text, or STT), turning text in one language into text in another (machine translation), and then turning the resulting text back into speech (text-to-speech, or TTS). This works quite well, really, but it isn’t perfect; each step has types of errors it is prone to, and these can compound one another.
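
Sketched in code, the cascade looks something like the following; the three callables are placeholders for whatever STT, MT and TTS engines get plugged in, not any real Google API:

```python
def cascaded_translate(source_audio, speech_to_text, machine_translate, text_to_speech):
    """Conventional cascaded speech translation.

    Each stage's errors are inherited (and often amplified) by the next;
    the callables are hypothetical stand-ins, not a specific library.
    """
    text_src = speech_to_text(source_audio)   # STT: misheard words enter here
    text_tgt = machine_translate(text_src)    # MT: mistranslations compound STT errors
    return text_to_speech(text_tgt)           # TTS: generic voice; source prosody is lost
```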

Furthermore, it’s not really how multilingual people translate in their own heads, as testimony about their own thought processes suggests. How exactly it works is impossible to say with certainty, but few would say that they break down the text and visualize it changing to a new language, then read the new text. Human cognition is frequently a guide for how to advance machine learning algorithms.

Spectrograms of source and translated speech. The translation, let us admit, is not the best. But it sounds better!

To that end, researchers began looking into converting spectrograms, detailed frequency breakdowns of audio, of speech in one language directly to spectrograms in another. This is a very different process from the three-step one, and has its own weaknesses, but it also has advantages.

One is that, while complex, it is essentially a single-step process rather than multi-step, which means, assuming you have enough processing power, Translatotron could work quicker. But more importantly for many, the process makes it easy to retain the character of the source voice, so the translation doesn’t come out robotically, but with the tone and cadence of the original sentence.
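
For anyone who hasn't worked with them, a spectrogram is straightforward to compute; here is a minimal NumPy sketch with illustrative parameters (not the model's actual front end):

```python
import numpy as np

def spectrogram(audio, sample_rate=16000, frame_ms=25, hop_ms=10):
    """Magnitude spectrogram via a short-time Fourier transform.

    Each column describes the frequency content of one ~25 ms slice of speech;
    a direct speech-to-speech model maps sequences of such columns in one
    language to sequences in another.
    """
    frame = int(sample_rate * frame_ms / 1000)   # samples per analysis window
    hop = int(sample_rate * hop_ms / 1000)       # step between windows
    window = np.hanning(frame)
    slices = [audio[i:i + frame] * window
              for i in range(0, len(audio) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(slices), axis=1)).T
```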

Naturally this has a huge impact on expression, and someone who relies on translation or voice synthesis regularly will appreciate that not only what they say comes through, but how they say it. It’s hard to overstate how important this is for regular users of synthetic speech.

The accuracy of the translation, the researchers admit, is not as good as the traditional systems, which have had more time to hone their accuracy. But many of the resulting translations are (at least partially) quite good, and being able to include expression is too great an advantage to pass up. In the end, the team modestly describes their work as a starting point demonstrating the feasibility of the approach, though it’s easy to see that it is also a major step forward in an important domain.

The paper describing the new technique was published on arXiv, and you can browse samples of speech, from source to traditional translation to Translatotron, at this page. Just be aware that these are not all selected for the quality of their translation, but serve more as examples of how the system retains expression while getting the gist of the meaning.


Source: The Tech Crunch


What's So Special About Human Screams? Ask a Screamologist

Posted by on May 15, 2019 in Science, Science / Psychology and Neuroscience | 0 comments

A better understanding of the acoustics of screaming could help scientists understand how and why humans shriek—and add a new dimension to the surveillance state!
Source: Wired


NASA Needs $1.6 Billion More to Send a Human to the Moon

Posted by on May 15, 2019 in Science, Science / Space | 0 comments

The space agency’s new budget amendment has been called a “down payment” on what will be needed in future years to fund the program.
Source: Wired


Scientists pull speech directly from the brain

Posted by on Apr 24, 2019 in Artificial Intelligence, Biotech, brain-computer interface, Health, Science, synthetic speech, TC, UCSF | 0 comments

In a feat that could eventually unlock the possibility of speech for people with severe medical conditions, scientists have successfully recreated the speech of healthy subjects by tapping directly into their brains. The technology is a long, long way from practical application but the science is real and the promise is there.

Edward Chang, neurosurgeon at UC San Francisco and co-author of the paper published today in Nature, explained the impact of the team’s work in a press release: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity. This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

To be perfectly clear, this isn’t some magic machine that you sit in and it translates your thoughts into speech. It’s a complex and invasive process that decodes not exactly what the subject is thinking, but what they were actually speaking.

Led by speech scientist Gopala Anumanchipalli, the experiment involved subjects who had already had large electrode arrays implanted in their brains for a different medical procedure. The researchers had these lucky people read several hundred sentences aloud while closely recording the signals detected by the electrodes.

The electrode array in question.

See, it happens that the researchers know a certain pattern of brain activity that comes after you think of and arrange words (in cortical areas like Wernicke’s and Broca’s) and before the final signals are sent from the motor cortex to your tongue and mouth muscles. There’s a sort of intermediate signal between those two stages that Anumanchipalli and his co-author, grad student Josh Chartier, previously characterized, and which they thought might work for the purpose of reconstructing speech.

Analyzing the audio directly let the team determine what muscles and movements would be involved when (this is pretty established science), and from this they built a sort of virtual model of the person’s vocal system.

They then mapped the brain activity detected during the session to that virtual model using a machine learning system, essentially allowing a recording of a brain to control a recording of a mouth. It’s important to understand that this isn’t turning abstract thoughts into words — it’s understanding the brain’s concrete instructions to the muscles of the face, and determining from those what words those movements would be forming. It’s brain reading, but it isn’t mind reading.
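
In rough sketch form, the decoding chain the team describes looks something like this; the two callables stand in for the trained models, and nothing here is the UCSF group's actual code:

```python
def decode_speech(ecog_signals, brain_to_articulation, articulation_to_audio):
    """Two-stage decode: brain signals -> virtual vocal tract -> sound.

    Stage 1 maps motor-cortex activity to tongue/lip/jaw/larynx trajectories
    ("a recording of a brain controls..."); stage 2 turns those trajectories
    into audible synthetic speech ("...a recording of a mouth"). Both stages
    are placeholders for models trained on the recorded sessions.
    """
    kinematics = brain_to_articulation(ecog_signals)
    waveform = articulation_to_audio(kinematics)
    return waveform
```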

The resulting synthetic speech, while not exactly crystal clear, is certainly intelligible. And set up correctly, it could be capable of outputting 150 words per minute from a person who may otherwise be incapable of speech.

“We still have a ways to go to perfectly mimic spoken language,” said Chartier. “Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

For comparison, a person so afflicted, for instance with a degenerative muscular disease, often has to speak by spelling out words one letter at a time with their gaze. Picture 5-10 words per minute, with other methods for more disabled individuals going even slower. It’s a miracle in a way that they can communicate at all, but this time-consuming and less than natural method is a far cry from the speed and expressiveness of real speech.

If a person was able to use this method, they would be far closer to ordinary speech, though perhaps at the cost of perfect accuracy. But it’s not a magic bullet.

The problem with this method is that it requires a great deal of carefully collected data from what amounts to a healthy speech system, from brain to tip of the tongue. For many people it’s no longer possible to collect this data, and for others the invasive method of collection will make it impossible for a doctor to recommend. And conditions that have prevented a person from ever talking prevent this method from working as well.

The good news is that it’s a start, and there are plenty of conditions it would work for, theoretically. And collecting that critical brain and speech recording data could be done preemptively in cases where a stroke or degeneration is considered a risk.


Source: The Tech Crunch


Watch Rocket Lab’s first launch of 2019 lift a DARPA experiment into orbit

Posted by on Mar 28, 2019 in DARPA, launches, Rocket Lab, rockets, Science, Space | 0 comments

Rocket Lab, the Kiwi operation working on breaking into the launch industry with small but frequent launches, has its first launch of the year today, due to take off in just a few minutes. Tune in below!

The company recently, after the numerous delays endemic to the launch industry, made its first real commercial launches, which spurred a $140 million investment. It is now working on increasing launch cadence and building enough rockets to do so.

Rocket Lab CEO Peter Beck was on stage at Disrupt SF not long ago talking about the new space economy. I thought it was a great discussion. (But then, I was the moderator, so how could it not be?)

The client for today’s launch is DARPA, which has opted to use smaller launch providers for a series of experiments and deployments. Onboard the Electron rocket today is the “RF Risk Reduction Deployment Demonstration,” or R3D2. It’s an experimental antenna made of “a tissue-thin Kapton membrane” that will deploy from its small package to a full 7 feet across once in orbit.

The earliest opportunities for the launch were well over a week ago, but in this business, delays are expected. Now all the little warning lights are off and the weather is fine, so we should be seeing R3D2 heading skyward in a few minutes.

You can watch the whole thing live below. I’ll update the post if there are any major updates.


Source: The Tech Crunch


Mars helicopter bound for the Red Planet takes to the air for the first time

Posted by on Mar 28, 2019 in drones, Gadgets, Government, Hardware, jpl, mars 2020, mars helicopter, NASA, Robotics, Science, Space, TC, UAVs | 0 comments

The Mars 2020 mission is on track for launch next year, and nesting inside the new rover heading that direction is a high-tech helicopter designed to fly in the planet’s nearly nonexistent atmosphere. The actual aircraft that will fly on the Martian surface just took its first flight, and its engineers are over the moon.

“The next time we fly, we fly on Mars,” said MiMi Aung, who manages the project at JPL, in a news release. An engineering model that was very close to final has over an hour of time in the air, but these two brief test flights were the first and last time the tiny craft will take flight until it does so on the distant planet (not counting its “flight” during launch).

“Watching our helicopter go through its paces in the chamber, I couldn’t help but think about the historic vehicles that have been in there in the past,” she continued. “The chamber hosted missions from the Ranger Moon probes to the Voyagers to Cassini, and every Mars rover ever flown. To see our helicopter in there reminded me we are on our way to making a little chunk of space history as well.”

Artist’s impression of how the helicopter will look when it’s flying on Mars.

A helicopter flying on Mars is much like a helicopter flying on Earth, except of course for the slight differences that the other planet has about a third of Earth’s gravity and 99 percent less air. It’s more like flying at 100,000 feet, Aung suggested.

It has its own solar panel so it can explore more or less on its own.

The test rig they set up not only produces a near-vacuum, replacing the air with a thin, Mars-like CO2 mix, but also includes a “gravity offload” system that simulates lower gravity by giving the helicopter a slight lift via a cable.
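
A quick back-of-envelope shows what that offload cable has to do; the four-pound mass is from the article, the gravity values are standard figures, and the calculation is my illustration rather than JPL's:

```python
# How much lift must the offload cable supply so the helicopter "feels"
# Martian gravity while hanging in the Earth-bound test chamber?
mass_kg = 4 * 0.4536          # ~1.8 kg (the article's "four pounds")
g_earth = 9.81                # m/s^2
g_mars = 3.71                 # m/s^2, about 38% of Earth's

offload_newtons = mass_kg * (g_earth - g_mars)
print(f"Cable lift: ~{offload_newtons:.1f} N "
      f"(~{offload_newtons / (mass_kg * g_earth):.0%} of the craft's Earth weight)")
# -> ~11.1 N, roughly 62% of what the craft weighs on Earth
```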

It flew at a whopping 2 inches of altitude for a total of a minute in two tests, which was enough to show the team that the craft (with all its 1,500 parts and four pounds) was ready to package up and send to the Red Planet.

“It was a heck of a first flight,” said tester Teddy Tzanetos. “The gravity offload system performed perfectly, just like our helicopter. We only required a 2-inch hover to obtain all the data sets needed to confirm that our Mars helicopter flies autonomously as designed in a thin Mars-like atmosphere; there was no need to go higher.”

A few months after the Mars 2020 rover has landed, the helicopter will detach and do a few test flights of up to 90 seconds. Those will be the first heavier-than-air flights on another planet — powered flight, in other words, rather than, say, a balloon filled with gaseous hydrogen.

The craft will operate mostly autonomously, since the half-hour round trip for commands would be far too long for an Earth-based pilot to operate it. It has its own solar cells and batteries, plus little landing feet, and will attempt flights of increasing distance from the rover over a 30-day period. It should go about three meters in the air and may eventually get hundreds of meters away from its partner.
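
That half-hour figure checks out against simple light-time arithmetic (standard distances; the sketch is mine, not NASA's):

```python
# One-way light time at a typical Earth-Mars separation
distance_km = 225_000_000     # rough average; the actual range is ~55M-400M km
c_km_per_s = 299_792          # speed of light
one_way_min = distance_km / c_km_per_s / 60

print(f"One way: ~{one_way_min:.0f} min; round trip: ~{2 * one_way_min:.0f} min")
# -> ~13 min one way, ~25 min round trip at the average distance
```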

Mars 2020 is estimated to be ready to launch next summer, arriving at its destination early in 2021. Of course, in the meantime we’ve still got Curiosity and InSight up there, so if you want the latest from Mars, you’ve got plenty of options to choose from.


Source: The Tech Crunch


Tiny claws let drones perch like birds and bats

Posted by on Mar 14, 2019 in Artificial Intelligence, biomimesis, biomimetic, drones, Gadgets, Hardware, Robotics, Science | 0 comments

Drones are useful in countless ways, but that usefulness is often limited by the time they can stay in the air. Shouldn’t drones be able to take a load off too? With these special claws attached, they can perch or hang with ease, conserving battery power and vastly extending their flight time.

The claws, created by a highly multinational team of researchers I’ll list at the end, are inspired by birds and bats. The team noted that many flying animals have specially adapted feet or claws suited to attaching the creature to its favored surface. Sometimes they sit, sometimes they hang, sometimes they just kind of lean on it and don’t have to flap as hard.

As the researchers write:

In all of these cases, some suitably shaped part of the animal’s foot interacts with a structure in the environment and facilitates that less lift needs to be generated or that powered flight can be completely suspended. Our goal is to use the same concept, which is commonly referred to as “perching,” for UAVs [unmanned aerial vehicles].

“Perching,” you say? Go on…

We designed a modularized and actuated landing gear framework for rotary-wing UAVs consisting of an actuated gripper module and a set of contact modules that are mounted on the gripper’s fingers.

This modularization substantially increased the range of possible structures that can be exploited for perching and resting as compared with avian-inspired grippers.

Instead of trying to build one complex mechanism, like a pair of articulating feet, the team gave the drones a set of specially shaped 3D-printed static modules and one big gripper.

The drone surveys its surroundings using lidar or some other depth-aware sensor. This lets it characterize surfaces nearby and match those to a library of examples that it knows it can rest on.

Squared-off edges like those on the top right can be rested on as in A, while a pole can be balanced on as in B.

If the drone sees a pole it needs to rest on, it can grab it from above. If it’s a horizontal bar, it can grip it and hang below, flipping up again when necessary. If it’s a ledge, it can use a little cutout to steady itself against the corner, letting it shut off some or all of its motors. These modules can easily be swapped out or modified depending on the mission.
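
In toy form, the decision described here might look like the snippet below; the surface categories echo the text, but the names, threshold and structure are purely illustrative:

```python
# Toy lookup from a recognized surface type to a perching strategy. In the
# real system the surface comes from lidar-based characterization and the
# strategies from the swappable contact modules.
PERCH_STRATEGIES = {
    "vertical_pole": "grab from above with the actuated gripper",
    "horizontal_bar": "grip and hang below, flip upright to resume flight",
    "ledge": "brace the cutout module against the corner and shut motors down",
}

def choose_action(surface_type, battery_fraction, reserve=0.3):
    """Perch when the battery runs low and a known surface is in reach."""
    if battery_fraction < reserve and surface_type in PERCH_STRATEGIES:
        return PERCH_STRATEGIES[surface_type]
    return "keep flying"
```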

I have to say the whole thing actually seems to work remarkably well for a prototype. The hard part appears to be the recognition of useful surfaces and the precise positioning required to land on them properly. But it’s useful enough — in professional and military applications especially, one suspects — that it seems likely to be a common feature in a few years.

The paper describing this system was published in the journal Science Robotics. I don’t want to leave anyone out, so it’s by: Kaiyu Hang, Ximin Lyu, Haoran Song, Johannes A. Stork, Aaron M. Dollar, Danica Kragic and Fu Zhang, from Yale, the Hong Kong University of Science and Technology, the University of Hong Kong, and the KTH Royal Institute of Technology.


Source: The Tech Crunch


Opportunity’s last Mars panorama is a showstopper

Posted by on Mar 13, 2019 in Gadgets, Government, Hardware, jpl, mars, mars rover, mars rovers, NASA, Opportunity, Science, Space, TC | 0 comments

The Opportunity Mars Rover may be officially offline for good, but its legacy of science and imagery is ongoing — and NASA just shared the last (nearly) complete panorama the robot sent back before it was blanketed in dust.

After more than 5,000 days (or rather sols) on the Martian surface, Opportunity found itself in Endeavour Crater, specifically in Perseverance Valley on the western rim. For the last month of its active life, it systematically imaged its surroundings to create another of its many impressive panoramas.

Using the Pancam, which shoots sequentially through blue, green and deep red (near-infrared) filters, it snapped 354 images of the area, capturing a broad variety of terrain as well as bits of itself and its tracks into the valley. You can click the image below for the full annotated version.

It’s as perfect and diverse an example of the Martian landscape as one could hope for, and the false-color image (the flatter true-color version is here) has a special otherworldly beauty to it, made all the more poignant by the fact that this was the rover’s last shot. In fact, it didn’t even finish — a monochrome region in the lower left shows where it still needed to add color.
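
For the curious, assembling a false-color frame from three single-filter exposures is conceptually simple; here is a minimal NumPy sketch (the channel assignment mirrors the filters named above, and the rest is illustrative, not JPL's pipeline):

```python
import numpy as np

def false_color(near_ir, green, blue):
    """Stack three single-filter exposures into one RGB false-color frame.

    The deep-red/near-infrared exposure is assigned to the red channel, which
    is why the result looks otherworldly rather than true to the eye.
    """
    norm = [np.clip(band.astype(float) / band.max(), 0.0, 1.0)
            for band in (near_ir, green, blue)]
    return np.dstack(norm)   # H x W x 3 array, ready to display or save
```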

This isn’t technically the last image the rover sent, though. As the fatal dust storm closed in, Opportunity sent one last thumbnail for an image that never went out: its last glimpse of the sun.

After this the dust cloud so completely covered the sun that Opportunity was enveloped in pitch darkness, as its true last transmission showed:

All the sparkles and dots are just noise from the image sensor. It would have been completely dark — and for weeks on end, considering the planetary scale of the storm.

Opportunity had a hell of a good run, lasting and traveling many times what it was expected to and exceeding even the wildest hopes of the team. That right up until its final day it was capturing beautiful and valuable data is testament to the robustness and care with which it was engineered.


Source: The Tech Crunch
