The blog of DataDiggers

Teams autonomously mapping the depths take home millions in Ocean Discovery Xprize

Posted on May 31, 2019 in Artificial Intelligence, conservation, Gadgets, Hardware, Robotics, Science, TC, XPRIZE

There’s a whole lot of ocean on this planet, and we don’t have much of an idea what’s at the bottom of most of it. That could change with the craft and techniques created during the Ocean Discovery Xprize, which had teams competing to map the sea floor quickly, precisely and autonomously. The winner just took home $4 million.

A map of the ocean would be valuable in and of itself, of course, but any technology used to do so could be applied in many other ways, and who knows what potential biological or medical discoveries hide in some nook or cranny a few thousand fathoms below the surface?

The prize, sponsored by Shell, started back in 2015. The goal was, ultimately, to create a system that could map hundreds of square kilometers of the sea floor at a five-meter resolution in less than a day — oh, and everything has to fit in a shipping container. For reference, existing methods do nothing like this, and are tremendously costly.

But as is usually the case with this type of competition, the difficulty did not discourage the competitors — it only spurred them on. Since 2015, then, the teams have been working on their systems and traveling all over the world to test them.

Originally the teams were to test in Puerto Rico, but after the devastating hurricane season of 2017, the whole operation was moved to the Greek coast. Ultimately after the finalists were selected, they deployed their craft in the waters off Kalamata and told them to get mapping.

Team GEBCO’s surface vehicle

“It was a very arduous and audacious challenge,” said Jyotika Virmani, who led the program. “The test itself was 24 hours, so they had to stay up, then immediately following that was 48 hours of data processing, after which they had to give us the data. It takes more traditional companies about two weeks or so to process data for a map once they have the raw data — we’re pushing for real time.”

This wasn’t a test in a lab bath or pool. This was the ocean, and the ocean is a dangerous place. But amazingly there were no disasters.

“Nothing was damaged, nothing imploded,” she said. “We ran into weather issues, of course. And we did lose one piece of technology that was subsequently found by a Greek fisherman a few days later… but that’s another story.”

At the start of the competition, Virmani said, there was feedback from the entrants that the autonomous piece of the task was simply not going to be possible. But the last few years have proven otherwise: the winning team not only met but exceeded the requirements of the task.

“The winning team mapped more than 250 square kilometers in 24 hours, at the minimum of five meters resolution, with around 140 square kilometers at better than five meters,” Virmani told me. “It was all unmanned: an unmanned surface vehicle that took the submersible out, then recovered it at sea, unmanned again, and brought it back to port. They had such great control over it — they were able to change its path and its programming throughout that 24 hours as they needed to.” (It should be noted that unmanned does not necessarily mean totally hands-off — the teams were permitted a certain amount of agency in adjusting or fixing the craft’s software or route.)

A five-meter resolution, if you can’t quite picture it, would produce a map of a city that showed buildings and streets clearly but was too coarse to catch, say, cars or street signs. When you’re trying to map two-thirds of the globe, though, this resolution is more than enough — and infinitely better than the nothing we currently have. (Unsurprisingly, it’s also certainly enough for an oil company like Shell to prospect new deep-sea resources.)
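To put those figures in perspective, here is a quick back-of-the-envelope calculation; it is plain arithmetic on the numbers quoted above, not anything from the prize materials.

```python
# Arithmetic on the figures quoted above: 250 km^2 at 5 m resolution in 24 hours.
area_m2 = 250 * 1_000_000        # 250 square kilometers in square meters
cell_m2 = 5 * 5                  # one five-meter-resolution grid cell
cells = area_m2 // cell_m2       # 10,000,000 depth estimates
per_second = cells / (24 * 3600)
print(f"{cells:,} cells, about {per_second:.0f} per second")
```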

The winning team was GEBCO, composed of veteran hydrographers — ocean mapping experts, you know. In addition to the highly successful unmanned craft (Sea-Kit, already cruising the English Channel for other purposes), the team did a lot of work on the data-processing side, creating a cloud-based solution that helped them turn the maps around quickly. (That may also prove to be a marketable service in the future.) They were awarded $4 million, in addition to their cash for being selected as a finalist.

The runner-up was Kuroshio, which had great resolution but was unable to map the full 250 square kilometers due to weather problems. They snagged a million.

A bonus prize for having the submersible track a chemical signal to its source didn’t exactly have a winner, but the teams’ entries were so impressive that the judges decided to split the million between the Tampa Deep Sea Xplorers and Ocean Quest, which amazingly enough is made up mostly of middle-schoolers. The latter gets $800,000, which should help pay for a few new tools in the shop there.

Lastly, a $200,000 innovation prize was given to Team Tao out of the U.K., which had a very different style to its submersible that impressed the judges. While most of the competitors opted for a craft that went “lawnmower-style” above the sea floor at a given depth, Tao’s craft dropped down like a plumb bob, pinging the depths as it went down and back up before moving to a new spot. This provides a lot of other opportunities for important oceanographic testing, Virmani noted.

Having concluded the prize, the organization has just a couple more tricks up its sleeve. GEBCO, which stands for General Bathymetric Chart of the Oceans, is partnering with The Nippon Foundation on Seabed 2030, an effort to map the entire sea floor over the next decade and provide that data to the world for free.

And the program is also — why not? — releasing an anthology of short sci-fi stories inspired by the idea of mapping the ocean. “A lot of our current technology is from the science fiction of the past,” said Virmani. “So we told the authors, imagine we now have a high-resolution map of the sea floor, what are the next steps in ocean tech and where do we go?” The resulting 19 stories, written from all 7 continents (yes, one from Antarctica), will be available June 7.


Source: TechCrunch


Google’s Translatotron converts one spoken language to another, no text involved

Posted on May 15, 2019 in Artificial Intelligence, Google, machine learning, machine translation, Science, Translation

Every day we creep a little closer to Douglas Adams’ famous and prescient Babel fish. A new research project from Google takes spoken sentences in one language and outputs spoken words in another — but unlike most translation techniques, it uses no intermediate text, working solely with the audio. This makes it quick, but more importantly lets it more easily reflect the cadence and tone of the speaker’s voice.

Translatotron, as the project is called, is the culmination of several years of related work, though it’s still very much an experiment. Google’s researchers, and others, have been looking into the possibility of direct speech-to-speech translation for years, but only recently have those efforts borne fruit worth harvesting.

Translating speech is usually done by breaking down the problem into smaller sequential ones: turning the source speech into text (speech-to-text, or STT), turning text in one language into text in another (machine translation), and then turning the resulting text back into speech (text-to-speech, or TTS). This works quite well, really, but it isn’t perfect; each step has types of errors it is prone to, and these can compound one another.
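As a minimal sketch of that cascade (the stage functions here are placeholders for whatever STT, MT and TTS models a real system would plug in, not any particular API):

```python
from typing import Callable

# Placeholder type aliases for the three stages of the traditional cascade.
SpeechToText = Callable[[bytes], str]       # audio in -> source-language text
MachineTranslate = Callable[[str], str]     # source text -> target-language text
TextToSpeech = Callable[[str], bytes]       # target text -> synthetic audio

def cascaded_translate(audio_in: bytes,
                       stt: SpeechToText,
                       mt: MachineTranslate,
                       tts: TextToSpeech) -> bytes:
    """Classic three-step pipeline. Each stage can introduce its own errors,
    which compound, and the speaker's tone and cadence are discarded at the
    text bottleneck."""
    source_text = stt(audio_in)
    target_text = mt(source_text)
    return tts(target_text)
```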

Furthermore, it’s not really how multilingual people translate in their own heads, as testimony about their own thought processes suggests. How exactly it works is impossible to say with certainty, but few would say that they break down the text and visualize it changing to a new language, then read the new text. Human cognition is frequently a guide for how to advance machine learning algorithms.

Spectrograms of source and translated speech. The translation, let us admit, is not the best. But it sounds better!

To that end, researchers began looking into converting spectrograms, detailed frequency breakdowns of audio, of speech in one language directly to spectrograms in another. This is a very different process from the three-step one, and has its own weaknesses, but it also has advantages.

One is that, while complex, it is essentially a single-step process rather than multi-step, which means, assuming you have enough processing power, Translatotron could work quicker. But more importantly for many, the process makes it easy to retain the character of the source voice, so the translation doesn’t come out robotically, but with the tone and cadence of the original sentence.
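For illustration only, and not Google’s actual model (Translatotron is described as an attention-based sequence-to-sequence network with a separate vocoder and an optional speaker encoder), a bare-bones sketch of a single network that maps source-language spectrogram frames straight to target-language frames might look like this:

```python
import torch
import torch.nn as nn

class DirectSpeechTranslator(nn.Module):
    """Toy sequence-to-sequence model: mel-spectrogram in, mel-spectrogram out,
    with no intermediate text representation. Layer sizes are made up."""

    def __init__(self, n_mels: int = 80, hidden: int = 256):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.to_frames = nn.Linear(hidden, n_mels)

    def forward(self, src_spec: torch.Tensor) -> torch.Tensor:
        # src_spec: (batch, time, n_mels) spectrogram of the source-language audio
        enc_out, _ = self.encoder(src_spec)
        dec_out, _ = self.decoder(enc_out)
        return self.to_frames(dec_out)      # predicted target-language spectrogram

# Spectrograms in, spectrograms out; a vocoder would still be needed for audio.
model = DirectSpeechTranslator()
dummy = torch.randn(1, 200, 80)             # ~2 seconds of 80-bin mel frames
translated_spec = model(dummy)              # shape: (1, 200, 80)
```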

Naturally this has a huge impact on expression, and someone who relies on translation or voice synthesis regularly will appreciate that not only does what they say come through, but how they say it. It’s hard to overstate how important this is for regular users of synthetic speech.

The accuracy of the translation, the researchers admit, is not as good as the traditional systems, which have had more time to hone their accuracy. But many of the resulting translations are (at least partially) quite good, and being able to include expression is too great an advantage to pass up. In the end, the team modestly describes their work as a starting point demonstrating the feasibility of the approach, though it’s easy to see that it is also a major step forward in an important domain.

The paper describing the new technique was published on arXiv, and you can browse samples of speech, from source to traditional translation to Translatotron, at this page. Just be aware that these are not all selected for the quality of their translation, but serve more as examples of how the system retains expression while getting the gist of the meaning.


Source: TechCrunch


Virgin Galactic is ‘coming home’ to Spaceport America in New Mexico

Posted on May 10, 2019 in richard branson, sir richard branson, Space, TC, Transportation, Virgin Galactic

Aspiring space tourism outfit Virgin Galactic has just announced its readiness to shift its operations to New Mexico’s Spaceport America, from which the company’s first commercial flights will take off. “Virgin Galactic is coming home to New Mexico where together we will open space to change the world for good,” said Virgin founder Sir Richard Branson at a press event.

The plan isn’t exactly a surprise, since Virgin Galactic and New Mexico collaborated on the creation of the spaceport, which at present is the only thing of its kind in the world. But moving from a testing and R&D hangar to a place where actual customers will board the spaceships is a major milestone.

I talked with George Whitesides, VG’s CEO, about what the move really means and, of course, when it will actually happen.

“We’re fulfilling the commitment that we made years ago to bring an operational spaceline to the world’s first purpose-built spaceport,” he told me. “So what does that mean? One, the vehicles are moving, and all the stuff that goes along with operating those vehicles. And all the people that operate the vehicles, and the staff that are so-called customer-facing. And you’ll have all the relevant supply chain folks and core infrastructure folks who are associated with running a spaceline.”

Right now, that rather complicated list really only adds up to about a hundred employees — a large part of the workforce will remain in Mojave, where R&D and new vehicle engineering will continue to be based in the form of The Spaceship Company.

“As we move towards commercial services, we’re thinking more about what comes next, like hypersonic and point-to-point spaceflight,” Whitesides said.

That said, VG isn’t finished with its existing craft just yet. You can expect a couple more, depending on what the engineers think is necessary. But it’s not a “huge number.”

Moving to Spaceport America from its Mojave facilities is being undertaken now for several reasons, Whitesides explained. In the first place, the craft is pretty much ready to go.

“The last flight we did, we basically demonstrated a full commercial profile, including the interior of the vehicle,” he said. “Not only did we, you know, go up to space and come down, but because Beth was in the back — Beth Moses, our flight instructor — she was sort of our mock passenger. She got up a couple times and moved around, she was able to verify our cabin conditions. So we started thinking, maybe we’re at a place where we could move.”

The paperwork from the FAA and other authorities is in order. The spaceport has been ready for some time, too, at least the difficult parts like the runway, fuel infrastructure, communications equipment and so on. Right now it’s more like they need to pick the color for the carpet and buy the flatscreens and fridges for inside.

“But the people perspective is a key part of this,” Whitesides continued. “These people have families, they have kids. We always thought, wouldn’t it be nice to move over the summer, so they don’t have to leave in the middle of a school year? If we start now, our employees can more easily integrate into the community in New Mexico. So we said, actually let’s just do this right now. It’s a bold choice and a big deal but it’s the right thing to do.”

And what about the vehicles, VMS Eve and VSS Unity? How will they get there?

“That’s the great thing about an air launch system,” said Whitesides. “It’s the easiest part, in a way. Once all the other stuff is down there we’ll look deep into each other’s eyes, and say ‘are we ready?’ And then we put together the spaceship and go. It’s built to fly longer distances than that — so we’ll start the day with our base of operations in Mojave, and end the day with our base of operations in New Mexico.”

And a lovely base it will be. The spaceport, designed by Foster & Partners in the U.K., is a striking shape that rises out of the desert and should have all the facilities necessary to run a commercial spaceline — it’s probably the only place in the world that would work for that purpose, which makes sense as it was built for it.

“Because we’re horizontal take-off and landing, operationally on the ground side, it basically looks like an airport. The coolest-looking airport ever, but an airport,” Whitesides said. “It’s got a big beautiful runway — but you’ll notice that it’s got Earth to space comms links, this special antenna, and instead of a tower we have a mission control, and of course there’s the special ground tankage — oxidizer tanks and that kind of propulsion related infrastructure.”

The airspace surrounding the spaceport is also restricted all the way from the surface up to infinity, which helps when your flights span multiple air traffic levels. “And it’s already a mile up, so that’s an asset,” Whitesides observed. A mile closer to space — more a convenience than a necessity, but it’s a good start.

The actual moving operations should take place over the summer. The remaining test flights aren’t yet scheduled, but I’m sure that will soon change — and you’ll definitely hear about it when the first commercial flights are put on the books.


Source: TechCrunch


Final Fantasy VII Remake trailer shows redo of the classic in action

Posted on May 9, 2019 in final fantasy, final fantasy vii, Gaming, playstation

’90s kids will remember this. Final Fantasy VII, the game that busted JRPGs out of their niche and helped make the original PlayStation the must-have console of the generation, is, as we all know, being remade. But until today it wasn’t really clear just what “remade” actually meant.

The teaser trailer put online today is packed full of details, though of course they may change over the course of development. It’s exciting not just for fans of this game, but for those of us who prefer VI and are deeply interested in how that (superior) game might get remade. Or VIII or IX, honestly.

The trailer shows the usual suspects traversing the first main area of the game, Midgar. A mix of cutscenes and gameplay presents a game that looks to be more like Final Fantasy XV than anything else. This may be a bitter pill for some — while I doubt anyone really expected a perfect recreation of the original’s turn-based combat, XV has been roundly criticized for oversimplification of the franchise’s occasionally quite complex systems.

With a single button for “attack,” another for a special, and the rest of the commands relegated to a hidden menu, it looks a lot more like an action RPG than the original. A playable Barret suggests the ability to switch between characters either at will or when the story demands. But there’s nothing to imply the hidden depths of, say, XII’s programmatic combat or even XIII’s convoluted breakage system.

But dang does it look good. Aerith (not “Aeris” as some would have it) looks sweet, Cloud is stone-faced and genie-panted, and Barret is buff and gruff, all as detailed and realistic as we have any right to expect. The city looks wonderfully rendered, and clearly they’re not phoning in the effects.

It’s more than a little possible that the company is considering applying the process used to remake VII to other titles (I can see them going all the way back to IV), with this game being the most obvious cash cow and test platform for it.

“More to come in June,” the video concludes.

Will we enter a gaming era rife with remakes preying on our nostalgia, sucking our wallets dry so we can experience a game for the 4th or 5th time, but with particle effects and streamlined menus? I hope so. Watch the full teaser below:


Source: TechCrunch


Scientists pull speech directly from the brain

Posted on Apr 24, 2019 in Artificial Intelligence, Biotech, brain-computer interface, Health, Science, synthetic speech, TC, UCSF

In a feat that could eventually unlock the possibility of speech for people with severe medical conditions, scientists have successfully recreated the speech of healthy subjects by tapping directly into their brains. The technology is a long, long way from practical application but the science is real and the promise is there.

Edward Chang, a neurosurgeon at UC San Francisco and co-author of the paper published today in Nature, explained the impact of the team’s work in a press release: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity. This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

To be perfectly clear, this isn’t some magic machine that you sit in and it translates your thoughts into speech. It’s a complex and invasive process that decodes not exactly what the subject is thinking, but what they were actually saying.

Led by speech scientist Gopala Anumanchipalli, the experiment involved subjects who had already had large electrode arrays implanted in their brains for a different medical procedure. The researchers had these lucky people read out several hundred sentences aloud while closely recording the signals detected by the electrodes.

The electrode array in question.

See, it happens that researchers know of a certain pattern of brain activity that comes after you think of and arrange words (in cortical areas like Wernicke’s and Broca’s) and before the final signals are sent from the motor cortex to your tongue and mouth muscles. There’s a sort of intermediate signal between those that Anumanchipalli and his co-author, grad student Josh Chartier, previously characterized, and which they thought might work for the purposes of reconstructing speech.

Analyzing the audio directly let the team determine what muscles and movements would be involved when (this is pretty established science), and from this they built a sort of virtual model of the person’s vocal system.

They then mapped the brain activity detected during the session to that virtual model using a machine learning system, essentially allowing a recording of a brain to control a recording of a mouth. It’s important to understand that this isn’t turning abstract thoughts into words — it’s understanding the brain’s concrete instructions to the muscles of the face, and determining from those what words those movements would be forming. It’s brain reading, but it isn’t mind reading.
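A rough sketch of that two-stage idea, with invented layer sizes rather than the authors’ published architecture: one network maps neural recordings to the movements of a virtual vocal tract, and a second maps those movements to acoustic features a vocoder could render as audio.

```python
import torch
import torch.nn as nn

class BrainToArticulation(nn.Module):
    """Stage 1: electrode features over time -> articulator trajectories
    (the 'virtual vocal tract'). All sizes are illustrative only."""

    def __init__(self, n_electrodes: int = 256, n_articulators: int = 33):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * 128, n_articulators)

    def forward(self, ecog: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(ecog)
        return self.out(h)

class ArticulationToAcoustics(nn.Module):
    """Stage 2: articulator trajectories -> acoustic features for a vocoder."""

    def __init__(self, n_articulators: int = 33, n_acoustic: int = 32):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, 128, batch_first=True)
        self.out = nn.Linear(128, n_acoustic)

    def forward(self, kinematics: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(kinematics)
        return self.out(h)

# Chained: brain recording -> virtual vocal tract -> speech features.
stage1, stage2 = BrainToArticulation(), ArticulationToAcoustics()
neural = torch.randn(1, 500, 256)            # fake electrode features over time
speech_features = stage2(stage1(neural))     # shape: (1, 500, 32)
```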

The resulting synthetic speech, while not exactly crystal clear, is certainly intelligible. And set up correctly, it could be capable of outputting 150 words per minute from a person who may otherwise be incapable of speech.

“We still have a ways to go to perfectly mimic spoken language,” said Chartier. “Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

For comparison, a person so afflicted, for instance with a degenerative muscular disease, often has to speak by spelling out words one letter at a time with their gaze. Picture 5-10 words per minute, with other methods for more disabled individuals going even slower. It’s a miracle in a way that they can communicate at all, but this time-consuming and less than natural method is a far cry from the speed and expressiveness of real speech.

If a person were able to use this method, they would be far closer to ordinary speech, though perhaps at the cost of perfect accuracy. But it’s not a magic bullet.

The problem with this method is that it requires a great deal of carefully collected data from what amounts to a healthy speech system, from brain to tip of the tongue. For many people it’s no longer possible to collect this data, and for others the invasive method of collection will make it impossible for a doctor to recommend. And conditions that have prevented a person from ever talking prevent this method from working as well.

The good news is that it’s a start, and there are plenty of conditions it would work for, theoretically. And collecting that critical brain and speech recording data could be done preemptively in cases where a stroke or degeneration is considered a risk.


Source: TechCrunch
