
The blog of DataDiggers


Teams autonomously mapping the depths take home millions in Ocean Discovery Xprize

Posted on May 31, 2019 in Artificial Intelligence, conservation, Gadgets, Hardware, Robotics, Science, TC, XPRIZE

There’s a whole lot of ocean on this planet, and we don’t have much of an idea what’s at the bottom of most of it. That could change with the craft and techniques created during the Ocean Discovery Xprize, which had teams competing to map the sea floor quickly, precisely and autonomously. The winner just took home $4 million.

A map of the ocean would be valuable in and of itself, of course, but any technology used to do so could be applied in many other ways, and who knows what potential biological or medical discoveries hide in some nook or cranny a few thousand fathoms below the surface?

The prize, sponsored by Shell, started back in 2015. The goal was, ultimately, to create a system that could map hundreds of square kilometers of the sea floor at a five-meter resolution in less than a day — oh, and everything has to fit in a shipping container. For reference, existing methods do nothing like this, and are tremendously costly.

But as is usually the case with this type of competition, the difficulty did not discourage the competitors — it only spurred them on. Since 2015, then, the teams have been working on their systems and traveling all over the world to test them.

Originally the teams were to test in Puerto Rico, but after the devastating hurricane season of 2017, the whole operation was moved to the Greek coast. Ultimately after the finalists were selected, they deployed their craft in the waters off Kalamata and told them to get mapping.

Team GEBCO’s surface vehicle

“It was a very arduous and audacious challenge,” said Jyotika Virmani, who led the program. “The test itself was 24 hours, so they had to stay up, then immediately following that was 48 hours of data processing after which they had to give us the data. It takes more traditional companies about two weeks or so to process data for a map once they have the raw data — we’re pushing for real time.”

This wasn’t a test in a lab bath or pool. This was the ocean, and the ocean is a dangerous place. But amazingly there were no disasters.

“Nothing was damaged, nothing imploded,” she said. “We ran into weather issues, of course. And we did lose one piece of technology that was subsequently found by a Greek fisherman a few days later… but that’s another story.”

At the start of the competition, Virmani said, there was feedback from the entrants that the autonomous piece of the task was simply not going to be possible. But the last few years have proven otherwise: the winning team not only met but exceeded the requirements of the task.

“The winning team mapped more than 250 square kilometers in 24 hours, at the minimum of five meters resolution, but around 140 was more than five meters,” Virmani told me. “It was all unmanned: An unmanned surface vehicle that took the submersible out, then recovered it at sea, unmanned again, and brought it back to port. They had such great control over it — they were able to change its path and its programming throughout that 24 hours as they needed to.” (It should be noted that unmanned does not necessarily mean totally hands-off — the teams were permitted a certain amount of agency in adjusting or fixing the craft’s software or route.)

A five-meter resolution, if you can’t quite picture it, would produce a map of a city that showed buildings and streets clearly, but is too coarse to catch, say, cars or street signs. When you’re trying to map two-thirds of the globe, though, this resolution is more than enough — and infinitely better than the nothing we currently have. (Unsurprisingly, it’s also certainly enough for an oil company like Shell to prospect new deep-sea resources.)
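To put the winning run in concrete terms, here's a quick back-of-envelope calculation (my own figures, not Xprize's) of what a 250-square-kilometer map at five-meter resolution implies:

```python
# Back-of-envelope: how many depth cells does 250 km^2 at a
# 5 m grid resolution imply? (Illustrative arithmetic only.)
area_km2 = 250
cell_m = 5.0

cells_per_km2 = (1000 / cell_m) ** 2   # 200 x 200 = 40,000 cells per km^2
total_cells = int(area_km2 * cells_per_km2)

print(f"{total_cells:,} grid cells")                               # 10,000,000
print(f"{total_cells / (24 * 3600):,.0f} cells/second over 24 h")  # ~116
```

Roughly ten million depth cells in a day — over a hundred per second of continuous operation — which hints at why the data-processing side was a challenge in its own right.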

The winning team was GEBCO, composed of veteran hydrographers — ocean mapping experts, you know. In addition to the highly successful unmanned craft (Sea-Kit, already cruising the English Channel for other purposes), the team did a lot of work on the data-processing side, creating a cloud-based solution that helped them turn the maps around quickly. (That may also prove to be a marketable service in the future.) They were awarded $4 million, on top of the money they received for being selected as finalists.

The runner-up was Kuroshio, which had great resolution but was unable to map the full 250 km² due to weather problems. They snagged a million.

A bonus prize for having the submersible track a chemical signal to its source didn’t exactly have a winner, but the teams’ entries were so impressive that the judges decided to split the million between the Tampa Deep Sea Xplorers and Ocean Quest, which amazingly enough is made up mostly of middle-schoolers. The latter gets $800,000, which should help pay for a few new tools in the shop there.

Lastly, a $200,000 innovation prize was given to Team Tao out of the U.K., which had a very different style to its submersible that impressed the judges. While most of the competitors opted for a craft that went “lawnmower-style” above the sea floor at a given depth, Tao’s craft dropped down like a plumb bob, pinging the depths as it went down and back up before moving to a new spot. This provides a lot of other opportunities for important oceanographic testing, Virmani noted.

Having concluded the prize, the organization has just a couple more tricks up its sleeve. GEBCO, which stands for General Bathymetric Chart of the Oceans, is partnering with The Nippon Foundation on Seabed 2030, an effort to map the entire sea floor over the next decade and provide that data to the world for free.

And the program is also — why not? — releasing an anthology of short sci-fi stories inspired by the idea of mapping the ocean. “A lot of our current technology is from the science fiction of the past,” said Virmani. “So we told the authors, imagine we now have a high-resolution map of the sea floor, what are the next steps in ocean tech and where do we go?” The resulting 19 stories, written from all 7 continents (yes, one from Antarctica), will be available June 7.


Source: The Tech Crunch


Cat vs best and worst robot vacuum cleaners 

Posted on May 11, 2019 in cat, Gadgets, Home Appliances, Home Automation, laser, Roborock S6, robot, robotic vacuum cleaner, Robotics, Rowenta, TC, Vacuum

If you’ve flirted with the idea of buying a robot vacuum you may also have stepped back from the brink in unfolding horror at the alphabetic soup of branded discs popping into view. Consumer choice sounds like a great idea until you’ve tried to get a handle on the handle-less vacuum space.

Amazon offers an A to Z link list of “top brands” that’s only a handful of letters short of a full alphabetic set. The horror.

What awaits the unseasoned robot vacuum buyer as they resign themselves to hours of online research to try to inform — or, well, form — a purchase decision is a seemingly endless permutation of robot vac reviews and round-ups.

Unfortunately there are just so many brands in play that all these reviews tend to act as fuel, feeding a growing black hole of indecision that sucks away at your precious spare time, demanding you spend more and more of it reading about robots that suck (when you could, let’s be frank, be getting on with the vacuuming task yourself) — only to come up for air each time even less convinced that buying a robot dirtbag is at all a good idea.

Reader, I know, because I fell into this hole. And it was hellish. So in the spirit of trying to prevent anyone else falling prey to convenience-based indecision I am — apologies in advance — adding to the pile of existing literature about robot vacuums with a short comparative account that (hopefully) helps cut through some of the chaff to the dirt-pulling chase.

Here’s the bottom line: Budget robot vacuums that lack navigational smarts are simply not worth your money, or indeed your time.

Yes, that’s despite the fact they are still actually expensive vacuum cleaners.

Basically these models entail overpaying for a vacuum cleaner that’s so poor you’ll still have to do most of the job yourself (i.e. with a non-robotic vacuum cleaner).

It’s the very worst kind of badly applied robotics.

Abandon hope of getting anything worth your money at the bottom end of the heap. I know this because, alas, I tried — opting, finally and foolishly (but, in my defence, at a point of near desperation after sifting so much virtual chaff the whole enterprise seemed to have gained lottery odds of success and I frankly just wanted my spare time back), for a model sold by a well-known local retailer.

It was a budget option but I assumed — or, well, hoped — the retailer had done its homework and picked a better-than-average choice. Or at least something that, y’know, could suck dust.

The brand in question (Rowenta) sat alongside the better known (and a bit more expensive) iRobot on the shop shelf. Surely that must count for something? I imagined wildly. Reader, that logic is a trap.

I can’t comment on the comparative performance of iRobot’s bots, which I have not personally tested, but I do not hesitate to compare a €180 (~$200) Rowenta-branded robot vacuum to a very expensive cat toy.

This robot vacuum was spectacularly successful at entertaining the cat — presumably on account of its dumb disposition, bouncing stupidly off of furniture owing to a total lack of navigational smarts. (Headbutting is a pretty big clue to how stupid a robot it is, as it’s never a stand-in for intelligence even when encountered in human form.)

Even more tantalizingly, from the cat’s point of view, the bot featured two white and whisker-like side brushes that protrude and spin at paw-tempting distance. In short: Pure robotic catnip.

The cat did not stop attacking the bot’s whiskers the whole time it was in operation. That certainly added to the obstacles getting in its way. But the more existential problem was it wasn’t sucking very much at all.

At the end of its first concluded ‘clean’, after it somehow managed to lurch its way back to first bump and finally hump its charging hub, I extracted the bin and had to laugh at the modest sized furball within. I’ve found larger clumps of dust gathering themselves in corners. So: Full marks for cat-based entertainment but as a vacuum cleaner it was horrible.

At this point I did what every sensible customer does when confronted with an abject lemon: Returned it for a full refund. And that, reader, might have been that for me and the cat and robot vacs. Who can be bothered to waste so much money and time for what appeared laughably incremental convenience? Even with a steady supply of cat fur to contend with.

But as luck would have it a Roborock representative emailed to ask if I would like to review their latest top-of-the-range model — which, at €549, does clock in at the opposite end of the price scale; ~3x the pitiful Rowenta. So of course I jumped at the chance to give the category a second spin — to see if a smarter device could impress me and not just tickle the cat’s fancy.

Clearly the price difference here, at the top vs the bottom of the range, is substantial. And yet, if you bought a car that was 3x cheaper than a Ferrari you’d still expect not just that the wheels stay on but that it can actually get you somewhere in good time, and do so without making you horribly carsick.

Turns out buyers of robot vacuums need to tread far more carefully.

Here comes the bookending top-line conclusion: Robot vacuums are amazing. A modern convenience marvel. But — and it’s a big one — only if you’re willing to shell out serious cash to get a device that actually does the job intended.

Roborock S6: It’s a beast at gobbling your furry friend’s dander

Comparing the Roborock S6 and the Rowenta Smart Force Essential Aqua RR6971WH (to give it its full and equally terrible name) is like comparing a high-end electric car with a wind-up kid’s toy.

Where the latter product was so penny-pinching the company hadn’t even paid to include in the box a user manual that contained actual words — opting, we must assume, to save on translation costs by producing a comic packed with inscrutable graphics and bizarro “don’t do” diagrams which only served to cement the fast-cooling buyer’s conviction they’d been sold a total lemon — the Roborock’s box contains a well written paper manual with words and clearly labeled diagrams. What a luxury!

At the same time there’s not really that much you need to grok to operate the Roborock. After a first pass to familiarize yourself with its various functions it’s delightfully easy to use. It will even produce periodic vocal updates — such as telling you it’s done cleaning and is going back to base. (Presumably in case you start to worry it’s gone astray under the bed. Or that quiet industry is a front for brewing robotic rebellion against indentured human servitude.)

One button starts a full clean — and this does mean full thanks to on-board laser navigation that allows the bot to map the rooms in real time. This means you get methodical passes, minimal headbutting and only occasional spots missed. (Another button will do a spot clean if the S6 does miss something or there’s a fresh spill that needs tidying — you just lift the bot to where you want it and hit the appropriate button.)

There is an app too, if you want to access extra features like being able to tell it to go clean a specific room, schedule cleans or set no-go zones. But, equally delightfully, there’s no absolute need to hook the bot to your wi-fi just to get it to do its primary job. All core features work without the faff of having to connect it to the Internet — nor indeed the worry of who might get access to your room-mapping data. From a privacy point of view this wi-fi-less app-free operation is a major plus.

In a small apartment with hard flooring the only necessary prep is a quick check to clear stuff like charging cables and stray socks off the floor. You can of course park dining chairs on the table to offer the bot a cleaner sweep. Though I found the navigation pretty adept at circling chair legs. Sadly the unit is a little too tall to make it under the sofa.

The S6 includes an integrated mopping function, which works incredibly well on lino-style hard flooring (but won’t be any use if you only have carpets). To mop you fill the water tank attachment; velcro-fix a dampened mop cloth to the bottom; and slide-clip the whole unit under the bot’s rear. Then you hit the go button and it’ll vacuum and mop in the same pass.

In my small apartment the S6 had no trouble doing a full floor clean in under an hour, without needing to return to base to recharge in the middle. (Roborock says the S6 will drive for up to three hours on a single charge.)

It also did not seem to get confused by relatively dark flooring in my apartment — which some reviews had suggested can cause headaches for robot vacuums by confusing their cliff sensors.

After that first clean I popped the lid to check on the contents of the S6’s transparent lint bin — finding an impressive quantity of dusty fuzz neatly wadded therein. This was really just robot vacuum porn, though; the gleaming floors spoke for themselves on the quality of the clean.

The level of dust gobbled by the S6 vs the Rowenta underlines the quality difference between the bottom and top end of the robot vacuum category.

So where the latter’s plastic carapace immediately became a magnet for all the room dust it had kicked up but spectacularly failed to suck, the S6’s gleaming white shell has stayed remarkably lint-free, acquiring only a minimal smattering of cat hairs over several days of operation — while the floors it’s worked have been left visibly dust- and fur-free. (At least until the cat got to work dirtying them again.)

Higher suction power, better brushes and a higher quality integrated filter appear to make all the difference. The S6 also does a much better cleaning job a lot more quietly. Roborock claims it’s 50% quieter than the prior model (the S5) and touts it as its quietest robot vacuum yet.

It’s not super silent but is quiet enough when cleaning hard floors not to cause a major disturbance if you’re working or watching something in the same room. Though the novelty can certainly be distracting.

Even the look of the S6 exudes robotic smarts — with its raised laser-housing bump resembling a glowing orange cylonic eye-slot.

I was surprised, at first glance, by the single, rather feeble-looking side brush vs the firm pair the Rowenta had fixed to its undercarriage. But again the S6’s tool is smartly applied — stepping its speed up and down depending on what the bot’s tackling. I found it could miss the odd bit of lint or debris such as cat litter, but when it did these specks stood out as the exception on an otherwise clean floor.

It’s also true that the cat did stick its paw in again to try attacking the S6’s single spinning brush. But these attacks were fewer and a lot less fervent than vs the Rowenta, as if the bot’s more deliberate navigation commanded greater respect and/or a more considered ambush. So it appears that even to a feline eye the premium S6 looks a lot less like a dumb toy.

Cat plots another ambush while the S6 works the floor

On a practical front, the S6’s lint bin has a capacity of 480ml. Roborock suggests cleaning it out weekly (assuming you’re using the bot every week), as well as washing the integrated dust filter (it supplies a spare in the box so you can switch one out to clean it and have enough time for it to fully dry before rotating it back into use).

If you use the mopping function the supplied reusable mop cloths do need washing afterwards too (Roborock also includes a few disposable alternatives in the box but that seems a pretty wasteful option when it’s easy enough to stick a reusable cloth in with a load of laundry or give it a quick wash yourself). So if you’re chasing a fully automated, robot-powered, end-to-cleaning-chores dream be warned there’s still a little human elbow grease required to keep everything running smoothly.

Still, there’s no doubt a top-of-the-range robot vacuum like the S6 will save you time cleaning.

If you can justify the not inconsiderable cost involved in buying this extra time by shelling out for a premium robot vacuum that’s smart enough to clean effectively, all that’s left to figure out is how to spend your time windfall wisely — resisting the temptation to just put your feet up and watch the clever little robot at work.


Source: The Tech Crunch


Mars helicopter bound for the Red Planet takes to the air for the first time

Posted on Mar 28, 2019 in drones, Gadgets, Government, Hardware, jpl, mars 2020, mars helicopter, NASA, Robotics, Science, Space, TC, UAVs

The Mars 2020 mission is on track for launch next year, and nesting inside the new rover headed that way is a high-tech helicopter designed to fly in the planet’s nearly non-existent atmosphere. The actual aircraft that will fly on the Martian surface just took its first flight, and its engineers are over the moon.

“The next time we fly, we fly on Mars,” said MiMi Aung, who manages the project at JPL, in a news release. An engineering model that was very close to final has over an hour of time in the air, but these two brief test flights were the first and last time the tiny craft will take flight until it does so on the distant planet (not counting its “flight” during launch).

“Watching our helicopter go through its paces in the chamber, I couldn’t help but think about the historic vehicles that have been in there in the past,” she continued. “The chamber hosted missions from the Ranger Moon probes to the Voyagers to Cassini, and every Mars rover ever flown. To see our helicopter in there reminded me we are on our way to making a little chunk of space history as well.”

Artist’s impression of how the helicopter will look when it’s flying on Mars.

A helicopter flying on Mars is much like a helicopter flying on Earth, except of course for the slight differences that the other planet has about a third the gravity and 99 percent less air. It’s more like flying at 100,000 feet, Aung suggested.

It has its own solar panel so it can explore more or less on its own.

The test rig they set up not only produces a near-vacuum, replacing the air with a thin, Mars-esque CO2 mix, but a “gravity offload” system simulates lower gravity by giving the helicopter a slight lift via a cable.

It flew at a whopping 2 inches of altitude for a total of a minute in two tests, which was enough to show the team that the craft (with all its 1,500 parts and four pounds) was ready to package up and send to the Red Planet.

“It was a heck of a first flight,” said tester Teddy Tzanetos. “The gravity offload system performed perfectly, just like our helicopter. We only required a 2-inch hover to obtain all the data sets needed to confirm that our Mars helicopter flies autonomously as designed in a thin Mars-like atmosphere; there was no need to go higher.”

A few months after the Mars 2020 rover has landed, the helicopter will detach and do a few test flights of up to 90 seconds. Those will be the first heavier-than-air flights on another planet — powered flight, in other words, rather than, say, a balloon filled with gaseous hydrogen.

The craft will operate mostly autonomously, since the half-hour round trip for commands would be far too long for an Earth-based pilot to operate it. It has its own solar cells and batteries, plus little landing feet, and will attempt flights of increasing distance from the rover over a 30-day period. It should go about three meters in the air and may eventually get hundreds of meters away from its partner.
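That half-hour round trip is essentially just light-travel time. As a sanity check (the distance here is my own representative assumption, not a mission figure), at a mid-range Earth–Mars separation:

```python
# Quick check on the command latency: round-trip light time at a
# representative Earth-Mars distance. The real figure varies from
# roughly 6 to over 40 minutes as the two orbits move.
C_KM_S = 299_792.458          # speed of light, km/s
distance_km = 225_000_000     # a mid-range separation, ~1.5 AU

round_trip_min = 2 * distance_km / C_KM_S / 60
print(f"~{round_trip_min:.0f} minutes")   # ~25 minutes
```

At that kind of lag, joystick piloting is out of the question, hence the emphasis on autonomy.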

Mars 2020 is estimated to be ready to launch next summer, arriving at its destination early in 2021. Of course in the meantime we’ve still got Curiosity and InSight up there, so if you want the latest from Mars, you’ve got plenty of options to choose from.


Source: The Tech Crunch


Game streaming’s multi-industry melee is about to begin

Posted on Mar 26, 2019 in Cloud, Gadgets, Gaming, Media, TC

Almost exactly 10 years ago, I was at GDC participating in a demo of a service I didn’t think could exist: OnLive. The company had promised high-definition, low-latency streaming of games at a time when real broadband was uncommon, mobile gaming was still defined by Bejeweled (though Angry Birds was about to change that), and Netflix was still mainly in the DVD-shipping business.

Although the demo went well, the failure of OnLive and its immediate successors to gain any kind of traction or launch beyond a few select markets indicated that while it may be in the future of gaming, streaming wasn’t in its present.

Well, now it’s the future. Bandwidth is plentiful, speeds are rising, games are shifting from things you buy to services you subscribe to, and millions prefer to pay a flat fee per month rather than worry about buying individual movies, shows, tracks, or even cheeses.

Consequently, as of this week — specifically as of Google’s announcement of Stadia on Tuesday — we see practically every major tech and gaming company attempting to do the same thing. Like the beginning of a chess game, the board is set or nearly so, and each company brings a different set of competencies and potential moves to the approaching fight. Each faces different challenges as well, though they share a few as a set.

Google and Amazon bring cloud-native infrastructure and familiarity online, but is that enough to compete with the gaming know-how of Microsoft, with its own cloud clout, or Sony, which made strategic streaming acquisitions and has a service up and running already? What of the third parties like Nvidia and Valve, publishers and storefronts that may leverage consumer trust and existing games libraries to jump start a rival? It’s a wide-open field, all right.

Before we examine them, however, it is perhaps worthwhile to entertain a brief introduction to the gaming space as it stands today and the trends that have brought it to this point.


Source: The Tech Crunch


Tiny claws let drones perch like birds and bats

Posted on Mar 14, 2019 in Artificial Intelligence, biomimesis, biomimetic, drones, Gadgets, Hardware, Robotics, Science

Drones are useful in countless ways, but that usefulness is often limited by the time they can stay in the air. Shouldn’t drones be able to take a load off too? With these special claws attached, they can perch or hang with ease, conserving battery power and vastly extending their flight time.

The claws, created by a highly multinational team of researchers I’ll list at the end, are inspired by birds and bats. The team noted that many flying animals have specially adapted feet or claws suited to attaching the creature to its favored surface. Sometimes they sit, sometimes they hang, sometimes they just kind of lean on it and don’t have to flap as hard.

As the researchers write:

In all of these cases, some suitably shaped part of the animal’s foot interacts with a structure in the environment and facilitates that less lift needs to be generated or that powered flight can be completely suspended. Our goal is to use the same concept, which is commonly referred to as “perching,” for UAVs [unmanned aerial vehicles].

“Perching,” you say? Go on…

We designed a modularized and actuated landing gear framework for rotary-wing UAVs consisting of an actuated gripper module and a set of contact modules that are mounted on the gripper’s fingers.

This modularization substantially increased the range of possible structures that can be exploited for perching and resting as compared with avian-inspired grippers.

Instead of trying to build one complex mechanism, like a pair of articulating feet, the team gave the drones a set of specially shaped 3D-printed static modules and one big gripper.

The drone surveys its surroundings using lidar or some other depth-aware sensor. This lets it characterize surfaces nearby and match those to a library of examples that it knows it can rest on.

Squared-off edges like those on the top right can be rested on as in A, while a pole can be balanced on as in B.

If the drone sees a pole it needs to rest on, it can grab it from above. If it’s a horizontal bar, it can grip it and hang below, flipping up again when necessary. If it’s a ledge, it can use a little cutout to steady itself against the corner, letting it shut off some or all of its motors. These modules can easily be swapped out or modified depending on the mission.
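The match-and-perch logic above can be sketched roughly as follows. This is a hypothetical illustration — the feature names, thresholds and library entries are all invented here, and the real system matches lidar point clouds rather than hand-picked features:

```python
# Hypothetical sketch of the perch-selection step: match a scanned
# surface against a small library of geometries the gripper modules
# can handle. Thresholds and names are invented for illustration.

PERCH_LIBRARY = {
    # geometry         -> (strategy, motors that can be shut off)
    "horizontal_pole":  ("grab_from_above", "most"),
    "horizontal_bar":   ("hang_below", "all"),
    "ledge_corner":     ("lean_on_cutout", "some or all"),
}

def classify_surface(width_m, is_horizontal, has_free_underside, has_corner):
    """Crude rule-based stand-in for matching depth data to the library."""
    if is_horizontal and width_m < 0.05 and has_free_underside:
        return "horizontal_bar"     # thin bar: grip it and hang beneath
    if is_horizontal and width_m < 0.15:
        return "horizontal_pole"    # pole: balance on top via the gripper
    if has_corner:
        return "ledge_corner"       # squared-off edge: steady with cutout
    return None                     # nothing in the library fits

surface = classify_surface(width_m=0.03, is_horizontal=True,
                           has_free_underside=True, has_corner=False)
if surface:
    strategy, saved = PERCH_LIBRARY[surface]
    # horizontal_bar: hang_below, can shut off all motors
    print(f"{surface}: {strategy}, can shut off {saved} motors")
```

The hard part in practice, as noted below, is the perception side: recognizing that a real surface is a safe instance of one of these templates and positioning precisely enough to use it.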

I have to say the whole thing actually seems to work remarkably well for a prototype. The hard part appears to be the recognition of useful surfaces and the precise positioning required to land on them properly. But it’s useful enough — in professional and military applications especially, one suspects — that it seems likely to be a common feature in a few years.

The paper describing this system was published in the journal Science Robotics. I don’t want to leave anyone out, so it’s by: Kaiyu Hang, Ximin Lyu, Haoran Song, Johannes A. Stork, Aaron M. Dollar, Danica Kragic and Fu Zhang, from Yale, the Hong Kong University of Science and Technology, the University of Hong Kong, and the KTH Royal Institute of Technology.


Source: The Tech Crunch


Opportunity’s last Mars panorama is a showstopper

Posted on Mar 13, 2019 in Gadgets, Government, Hardware, jpl, mars, mars rover, mars rovers, NASA, Opportunity, Science, Space, TC

The Opportunity Mars Rover may be officially offline for good, but its legacy of science and imagery is ongoing — and NASA just shared the last (nearly) complete panorama the robot sent back before it was blanketed in dust.

After more than 5,000 days (or rather sols) on the Martian surface, Opportunity found itself in Endeavour Crater, specifically in Perseverance Valley on the western rim. For the last month of its active life, it systematically imaged its surroundings to create another of its many impressive panoramas.

Using the Pancam, which shoots sequentially through blue, green and deep red (near-infrared) filters, it snapped 354 images of the area, capturing a broad variety of terrain as well as bits of itself and its tracks into the valley. You can click the image below for the full annotated version.

It’s as perfect and diverse an example of the Martian landscape as one could hope for, and the false-color image (the flatter true-color version is here) has a special otherworldly beauty to it, which is only added to by the poignancy of this being the rover’s last shot. In fact, it didn’t even finish — a monochrome region in the lower left shows where it needed to add color next.

This isn’t technically the last image the rover sent, though. As the fatal dust storm closed in, Opportunity sent one last thumbnail for an image that never went out: its last glimpse of the sun.

After this the dust cloud so completely covered the sun that Opportunity was enveloped in pitch darkness, as its true last transmission showed:

All the sparkles and dots are just noise from the image sensor. It would have been completely dark — and for weeks on end, considering the planetary scale of the storm.

Opportunity had a hell of a good run, lasting and traveling many times what it was expected to and exceeding even the wildest hopes of the team. That right up until its final day it was capturing beautiful and valuable data is testament to the robustness and care with which it was engineered.


Source: The Tech Crunch


Google’s new voice recognition system works instantly and offline (if you have a Pixel)

Posted on Mar 12, 2019 in Apps, Artificial Intelligence, Gadgets, gboard, Google, Mobile, natural language processing, nlp, Speech Recognition

Voice recognition is a standard part of the smartphone package these days, and a corresponding part is the delay while you wait for Siri, Alexa or Google to return your query, either correctly interpreted or horribly mangled. Google’s latest speech recognition works entirely offline, eliminating that delay altogether — though of course mangling is still an option.

The delay occurs because your voice, or some data derived from it anyway, has to travel from your phone to the servers of whoever operates the service, where it is analyzed and sent back a short time later. This can take anywhere from a handful of milliseconds to multiple entire seconds (what a nightmare!), or longer if your packets get lost in the ether.

Why not just do the voice recognition on the device? There’s nothing these companies would like more, but turning voice into text on the order of milliseconds takes quite a bit of computing power. It’s not just about hearing a sound and writing a word — understanding what someone is saying word by word involves a whole lot of context about language and intention.

Your phone could do it, for sure, but it wouldn’t be much faster than sending it off to the cloud, and it would eat up your battery. But steady advancements in the field have made it plausible to do so, and Google’s latest product makes it available to anyone with a Pixel.

Google’s work on the topic, documented in a paper here, built on previous advances to create a model small and efficient enough to fit on a phone (it’s 80 megabytes, if you’re curious), but capable of hearing and transcribing speech as you say it. No need to wait until you’ve finished a sentence to think whether you meant “their” or “there” — it figures it out on the fly.

So what’s the catch? Well, it only works in Gboard, Google’s keyboard app, and it only works on Pixels, and it only works in American English. So in a way this is just kind of a stress test for the real thing.

“Given the trends in the industry, with the convergence of specialized hardware and algorithmic improvements, we are hopeful that the techniques presented here can soon be adopted in more languages and across broader domains of application,” writes Google, as if it is the trends that need to do the hard work of localization.

Making speech recognition more responsive and having it work offline is a nice development. But it’s sort of funny considering hardly any of Google’s other products work offline. Are you going to dictate into a shared document while you’re offline? Write an email? Ask for a conversion between liters and cups? You’re going to need a connection for that! Of course this will also be better on slow and spotty connections, but you have to admit it’s a little ironic.


Source: The Tech Crunch


Leica’s Q2 is a beautiful camera that I want and will never have

Posted by on Mar 8, 2019 in cameras, Gadgets, Hardware, leica, leica q2, Photography | 0 comments

Leica is a brand I respect and appreciate but don’t support. Or rather, can’t, because I’m not fabulously rich. But if I did have $5,000 to spend on a fixed-lens camera, I’d probably get the new Q2, a significant improvement over 2015’s Q — which tempted me back then.

The Q2 keeps much of what made the Q great: a full-frame sensor, a fabulous 28mm F/1.7 Summilux lens, and straightforward operation focused on getting the shot. But it also makes some major changes that make the Q2 a far more competitive camera.

The sensor has jumped from 24 to 47 megapixels, and while we’re well out of the megapixel race, that creates the opportunity for a very useful cropped shooting mode that lets you shoot at 35, 50, and 75mm equivalents while still capturing huge pixel counts. It keeps the full-frame exposure as well so you can tweak the crop later. The new sensor also has a super low native ISO of 50, which should help with dynamic range and in certain exposure conditions.
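The arithmetic behind those crop modes is simple: cropping a 28mm field of view to a longer equivalent focal length keeps the square of the focal-length ratio’s worth of sensor area, and the same fraction of the pixels. A quick back-of-the-envelope sketch (the resulting megapixel figures are derived here, not quoted from Leica’s spec sheet):

```python
# Effective megapixels remaining after cropping a 28mm full-frame image
# to a tighter equivalent field of view. Area, and therefore pixel count,
# scales with the square of the focal-length ratio.

def cropped_megapixels(native_mp, native_focal_mm, equiv_focal_mm):
    """Pixels remaining after cropping to the given equivalent focal length."""
    return native_mp * (native_focal_mm / equiv_focal_mm) ** 2

for equiv in (35, 50, 75):
    mp = cropped_megapixels(47, 28, equiv)
    print(f"{equiv}mm-equivalent crop: ~{mp:.1f} MP")
```

So even the 50mm-equivalent crop leaves roughly 15 megapixels — more than enough for prints — which is why the jump to 47 megapixels matters despite the megapixel race being over.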

Autofocus has been redone as well (as you might expect with a new sensor) and it should be quicker and more accurate now. There’s also an optical stabilization mode that kicks in when you are shooting at under 1/60s. Both features need a little testing to verify they’re as good as they sound, but I don’t expect they’re fraudulent or anything.

The body, already a handsome minimal design in keeping with Leica’s impeccable (if expensive) taste, is now weather sealed, making this a viable walk-around camera in all conditions. Imagine paying five grand for a camera and being afraid to take it out in the rain! Well, many people did that and perhaps will feel foolish now that the Q2 has arrived.

Inside is an electronic viewfinder; the 2015 Q used a sequential-field display — meaning it flashed rapidly through the red, green, and blue components of the image — which made it prone to color artifacts in high-motion scenes or when panning. The Q2, however, has a shiny new OLED display with the same resolution but better performance. OLEDs are great for EVFs for a lot of reasons, but I like that you get really nice blacks, like in an optical viewfinder.

The button layout has been simplified as well (or rather synchronized with the CL, another Leica model), with a new customizable button on the top plate, reflecting the trend of personalization we’ve seen in high-end cameras. A considerably larger battery and a redesigned battery and card door round out the new features.

As DPReview points out in its hands-on preview of the camera, the Q2 is significantly heavier than the high-end fixed-lens competition (namely the Sony RX1R II and Fuji X100F, both excellent cameras), and also significantly more expensive. But unlike many Leica offerings, it actually outperforms them in important ways: the lens, the weather sealing, the burst speed — it may be expensive, but you actually get something for your money. That can’t always be said of this brand.

The Leica Q2 typifies the type of camera I’d like to own: no real accessories, nothing to swap in or out, great image quality and straightforward operation. I’m far more likely to get an X100F (and even then it’d be a huge splurge) but all that time I’ll be looking at the Q2 with envious eyes. Maybe I’ll get to touch one some day.


Source: The Tech Crunch


SpaceX’s Crew Dragon makes its first orbital launch tonight

Posted by on Mar 1, 2019 in commercial crew, crew dragon, Gadgets, Government, Hardware, NASA, Space, SpaceX | 0 comments

After years of development and delays, SpaceX’s Crew Dragon is ready to launch into orbit. It’s the first commercially built and operated crewed spacecraft ever to do so, and represents in many ways the public-private partnership that could define the future of spaceflight.

Launch is set for just before midnight Pacific time — 2:49 a.m. Eastern time in Cape Canaveral, from where the Falcon 9 carrying the Crew Dragon capsule will take off. It’s using Launchpad 39A at Kennedy Space Center, which previously hosted Apollo missions and more recently SpaceX’s momentous Falcon Heavy launch. Feel free to relive that moment with us, while you’re here:

The capsule has been the work of many years and billions of dollars: an adaptation of the company’s Dragon capsule, but with much of its cargo space converted to a spacious crew compartment. It can seat seven if necessary, but given the actual needs of the International Space Station, it is more likely to carry two or three people and a load of supplies.

Of course it had to meet extremely stringent safety requirements, with an emergency escape system, redundant thrusters and parachutes, newly designed spacesuits, more intuitive and modern control methods and so on.

Crew Dragon interior, with “Ripley”

It’s a huge technological jump over the Russian Soyuz capsule that has been the only method to get humans to space for the last eight years, since the Shuttle program was grounded for good. But one thing Dragon doesn’t have is the Soyuz’s exemplary flight record. The latter may look like an aircraft cockpit shrunk down to induce claustrophobia, but it has proven itself over and over for decades. The shock produced by a recent aborted launch and the quickness with which the Soyuz resumed service are testament to the confidence it has engendered in its users.

But for a number of reasons the U.S. can’t stay beholden to Russia for access to space, and at any rate the commercial spaceflight companies were going to send people up there anyway. So NASA dedicated a major portion of its budget to funding a new crew capsule, pitting SpaceX and Boeing against one another.

SpaceX has gotten the better of Boeing for the most part, progressing through numerous tests and milestones, not exactly quickly, but with fewer delays than its competitor. Test flights originally scheduled for 2016 are only just now beginning to take place. Boeing’s Starliner doesn’t have a launch date yet, but it’s expected to be this summer.

Tonight’s test (“Demo-1”) is the first time the Crew Dragon will fly to space; suborbital flights and landing tests have already taken place, but this is a dry run of the real thing. Well, not completely dry: the capsule is carrying 400 pounds of supplies to the station and will return with some science experiments on board.

After launch, it should take about 11 minutes for the capsule to detach from the first and second stages of the Falcon 9 rocket. It docks about 27 hours later, early Sunday morning, and the crew will be able to get at the goodies just in time for brunch, if for some reason they’re operating on East Coast time.
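The timeline checks out with a little date arithmetic — taking the 2:49 a.m. Eastern liftoff early Saturday, March 2, and adding the stated ~27-hour transit (times kept naive here for simplicity; a production version would use proper time zones):

```python
from datetime import datetime, timedelta

# Liftoff at 2:49 a.m. Eastern on Saturday, March 2, 2019 -- the article's
# "just before midnight Pacific" -- with docking stated as ~27 hours later.
liftoff = datetime(2019, 3, 2, 2, 49)   # Eastern time, naive datetime
docking = liftoff + timedelta(hours=27)

print(docking.strftime("%A %H:%M"))  # early Sunday morning, Eastern time
```

That lands the docking at 5:49 a.m. Eastern on Sunday — brunch-adjacent only if the crew is keeping East Coast hours.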

SpaceX will be live streaming the launch as usual starting shortly before takeoff; you can watch it right here:


Source: The Tech Crunch


Amazon stops selling stick-on Dash buttons

Posted by on Mar 1, 2019 in Amazon, amazon dash, api, button, connected objects, Dash, dash button, Dash Replenishment, E-Commerce, eCommerce, Gadgets, Germany, Internet of things, IoT, voice assistant | 0 comments

Amazon has confirmed it’s retired physical stick-on Dash buttons from sale — in favor of virtual alternatives that let Prime Members tap a digital button to reorder a staple product.

It also points to its Dash Replenishment service — which offers an API for device makers who want to build internet-connected appliances that can automatically reorder the products they need to function, be it cat food, batteries or washing powder — as another reason why physical Dash buttons, which launched back in 2015 (costing $5 a pop), are past their sell-by date.
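Conceptually, a Dash Replenishment appliance just makes an authenticated HTTP call against a “slot” representing the consumable it tracks. The sketch below builds (but does not send) such a request; the endpoint hostname, slot ID, and token are illustrative assumptions for this article, not Amazon’s published values, and real devices would first obtain a token via Login with Amazon:

```python
from urllib import request

# Sketch of the device-side "reorder" call a replenishment-enabled appliance
# makes when a tracked consumable runs low. The URL, slot ID, and token below
# are hypothetical placeholders, not Amazon's documented endpoint.

SLOT_ID = "detergent_slot"            # hypothetical consumable slot
ACCESS_TOKEN = "Atza|EXAMPLE_TOKEN"   # placeholder LWA-style access token

req = request.Request(
    url=f"https://dash-replenishment.example.amazon.com/replenish/{SLOT_ID}",
    method="POST",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)

# Nothing is sent -- we only inspect the request the device would issue.
print(req.get_method(), req.full_url)
```

The design point is that the appliance never handles payment or product details itself; it just signals “slot X is low” and Amazon’s side resolves that to an order — which is also why a screenful of pre-purchase information is easier to bolt onto this flow than onto a screen-less button.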

Amazon says “hundreds” of IoT devices capable of self-ordering on Amazon have been launched globally to date by brands including Beko, Epson, illy, Samsung and Whirlpool, to name a few.

So why press a physical button when a digital one will do? Or, indeed, why not do away with the need to push a button at all and just let your gadgets rack up your grocery bill all by themselves while you get on with the important business of consuming all the stuff they’re ordering?

You can see where Amazon wants to get to with its “so customers don’t have to think at all about restocking” line. Consumption that entirely removes the consumer’s decision making process from the transactional loop is quite the capitalist wet dream. Though the company does need to be careful about consumer protection rules as it seeks to excise friction from the buying process.

The ecommerce behemoth also claims customers are “increasingly” using its Alexa voice assistant to reorder staples, such as via the Alexa Shopping voice shopping app (Amazon calls it ‘hands free shopping’) that lets people inform the machine about a purchase intent and it will suggest items to buy based on their Amazon order history.

It offers no actual usage metrics for Alexa Shopping, though. So that’s meaningless PR.

A less flashy but perhaps more popular option than ‘hands free shopping’, which Amazon also says has contributed to making physical Dash buttons redundant, is its Subscribe & Save program.

This “lets customers automatically receive their favourite items every month”, as Amazon puts it. It offers an added incentive of discounts that kick in if the user signs up to buy five or more products per month. But the mainstay of the sales pitch is convenience with Amazon touting time saved by subscribing to ‘essentials’ — and time saved from compiling boring shopping lists once again means more time to consume the stuff being bought on Amazon…

In a statement about retiring physical Dash buttons from global sale on February 28, Amazon also confirmed it will continue to support existing Dash owners — presumably until their buttons wear down to the bare circuit board from repeat use.

“Existing Dash Button customers can continue to use their Dash Button devices,” it writes. “We look forward to continuing support for our customers’ shopping needs, including growing our Dash Replenishment product line-up and expanding availability of virtual Dash Buttons.”

So farewell then, clunky Dash buttons. Another physical push-button bites the dust. The plastic-y Dash buttons were quite unlike the classic iPhone home button, though — always seeming temporary and experimental rather than slick and coolly reassuring. Even so, the end of both buttons points to the need for tech businesses to tool up for the next wave of contextually savvy connected devices. More smarts, and more controllable smarts, is key.

Amazon’s statement about ‘shifting focus’ for Dash does not mention potential legal risks around the buttons related to consumer rights challenges — but that’s another angle here.

In January a court in Germany ruled Dash buttons breached local ecommerce rules, following a challenge by a regional consumer watchdog that raised concerns about T&Cs which allow Amazon to substitute a product of a higher price or even a different product entirely than what the consumer had originally selected. The watchdog argued consumers should be provided with more information about price and product before taking the order — and the judges agreed. Though Amazon said it would seek to appeal.

While it’s not clear whether or not that legal challenge contributed to Amazon’s decision to shutter Dash, it’s clear that virtual Dash buttons offer more opportunities for displaying additional information prior to a purchase than a screen-less physical Dash button. So are more easily adaptable to any tightening legal requirements across different markets.

The demise of the physical Dash was reported earlier by CNET.


Source: The Tech Crunch
