
The blog of DataDiggers


Startups Weekly: Will the real unicorns please stand up?

Posted on Jun 1, 2019

Hello and welcome back to Startups Weekly, a newsletter published every Saturday that dives into the week’s noteworthy venture capital deals, funds and trends. Before I dive into this week’s topic, let’s catch up a bit. Last week, I wrote about the sudden uptick in beverage startup rounds. Before that, I noted an alternative to venture capital fundraising called revenue-based financing. Remember, you can send me tips, suggestions and feedback to kate.clark@techcrunch.com or on Twitter @KateClarkTweets.

Here’s what I’ve been thinking about this week: Unicorn scarcity, or lack thereof. I’ve written about this concept before, as has my Equity co-host, Crunchbase News editor-in-chief Alex Wilhelm. I apologize if the two of us are broken records, but I think we’re equally perplexed by the pace at which companies are garnering $1 billion valuations.

Here’s the latest data, according to Crunchbase: “2018 outstripped all previous years in terms of the number of unicorns created and venture dollars invested. Indeed, 151 new unicorns joined the list in 2018 (compared to 96 in 2017), and investors poured more than $135 billion into those companies, a 52% increase year-over-year and the biggest sum invested in unicorns in any one year since unicorns became a thing.”

2019 has already minted 42 new unicorns, like Glossier, Calm and Hims, a number that grows each and every week. For context, a total of 19 companies joined the unicorn club in 2013, when Aileen Lee, an established investor, coined the term. Today, there are some 450 companies around the globe that qualify as unicorns, representing a cumulative valuation of $1.6 trillion. 😲

We’ve clung to this fantastical terminology for so many years because it helps us classify startups, singling out those that boast valuations so high, they’ve gained entry to a special, elite club. In 2019, however, $100 million-plus rounds are the norm and billion-dollar-plus funds are standard. Unicorns aren’t rare anymore; it’s time to rethink the unicorn framework.

Last week, I suggested we only refer to profitable companies with a valuation larger than $1 billion as unicorns. Understandably, not everyone was too keen on that idea. Why? Because startups in different sectors face barriers of varying proportions. A SaaS company, for example, is likely to achieve profitability a lot quicker than a moonshot bet on autonomous vehicles or virtual reality. Refusing startups that aren’t yet profitable access to the unicorn club would unfairly favor certain industries.

So what can we do? Perhaps we increase the valuation minimum necessary to be called a unicorn to $10 billion? Initialized Capital’s Garry Tan suggested requiring that a startup show 50% annual growth to be considered a unicorn, though getting companies to disclose that would be near-impossible…

While I’m here, let me share a few of the other eclectic responses I received following the above tweet. Joseph Flaherty said we should call profitable billion-dollar companies Pegasus “since [they’ve] taken flight.” Reagan Pollack thinks profitable startups oughta be referred to as leprechauns. Hmmmm.

The suggestions didn’t stop there, though I’m not so sure adopting monikers like Pegasus and leprechaun will really solve the unicorn overpopulation problem. Let me know what you think. Onto other news.

Image by Rafael Henrique/SOPA Images/LightRocket via Getty Images

IPO corner

CrowdStrike has set its IPO terms. The company has inked plans to sell 18 million shares at between $19 and $23 apiece. At the midpoint price, CrowdStrike would raise $378 million at a valuation north of $4 billion.
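As a quick sanity check on those terms (a back-of-the-envelope calculation, not from the filing itself), the $378 million figure follows directly from the share count and the midpoint of the price range:

```python
# Hypothetical sanity check of the reported CrowdStrike IPO terms.
shares = 18_000_000
low, high = 19, 23             # price range per share, in dollars
midpoint = (low + high) / 2    # $21 per share
proceeds = shares * midpoint   # gross proceeds at the midpoint price

print(midpoint, proceeds)      # 21.0 378000000.0
```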

Slack inches closer to direct listing. The company released updated first-quarter financials on Friday, posting revenues of $134.8 million on losses of $31.8 million. That represents a 67% increase in revenues from the same period last year, when the company lost $24.8 million on $80.9 million in revenue.

Startup Capital

Online lender SoFi has quietly raised $500M led by Qatar
Groupon co-founder Eric Lefkofsky just raised another $200M for his new company Tempus
Less than 1 year after launching, Brex eyes $2B valuation
Password manager Dashlane raises $110M Series D
Enterprise cybersecurity startup BlueVoyant raises $82.5M at a $430M valuation
Talkspace picks up $50M Series D
TaniGroup raises $10M to help Indonesia’s farmers grow
Stripe and Precursor lead $4.5M seed into media CRM startup Pico

Funds

Maveron, a venture capital fund co-founded by Starbucks mastermind Howard Schultz, has closed on another $180 million to invest in early-stage consumer startups. The capital represents the firm’s seventh fundraise and its largest since 2000. To keep the fund from reaching mammoth proportions, the firm’s general partners said they turned away more than $70 million amid high demand for the effort. There’s more where that came from; here’s a quick look at the other VCs to announce funds this week.

~Extra Crunch~

This week, I penned a deep dive on Slack, formerly known as Tiny Speck, for our premium subscription service Extra Crunch. The story kicks off in 2009 when Stewart Butterfield began building a startup called Tiny Speck that would later come out with Glitch, an online game that was neither fun nor successful. The story ends in 2019, weeks before Slack is set to begin trading on the NYSE. Come for the history lesson, stay for the investor drama. Here are the other standout EC pieces of the week.

Equity

If you enjoy this newsletter, be sure to check out TechCrunch’s venture-focused podcast, Equity. In this week’s episode, available here, Crunchbase News editor-in-chief Alex Wilhelm and I debate whether the tech press is too negative or too positive in its coverage of tech startups. Plus, we dive into Brex’s upcoming round, SoFi’s massive raise and CrowdStrike’s imminent IPO.


Source: The Tech Crunch


Reality Check: The marvel of computer vision technology in today’s camera-based AR systems

Posted on May 15, 2019

British science fiction writer Sir Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Augmented reality has the potential to instill awe and wonder in us just as magic would. For the very first time in the history of computing, we now have the ability to blur the line between the physical world and the virtual world. AR promises to bring forth the dawn of a new creative economy, where digital media can be brought to life and given the ability to interact with the real world.

AR experiences can seem magical, but what exactly is happening behind the curtain? To answer this, we must look at the three basic foundations of a camera-based AR system like the one in our smartphones.

  1. How does a computer know where it is in the world? (Localization + Mapping)
  2. How does a computer understand what the world looks like? (Geometry)
  3. How does a computer understand the world as we do? (Semantics)

Part 1: How does a computer know where it is in the world? (Localization)

Mars Rover Curiosity taking a selfie on Mars. Source: https://www.nasa.gov/jpl/msl/pia19808/looking-up-at-mars-rover-curiosity-in-buckskin-selfie/

When NASA scientists put the rover onto Mars, they needed a way for the robot to navigate itself on a different planet without the use of a global positioning system (GPS). They came up with a technique called Visual Inertial Odometry (VIO) to track the rover’s movement over time without GPS. This is the same technique that our smartphones use to track their spatial position and orientation.

A VIO system is made up of two parts: a visual odometry system built around the camera, and an inertial odometry system built around the device’s inertial measurement unit (IMU).
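The core intuition behind fusing the two sensors can be shown in a deliberately simplified one-dimensional toy (real VIO uses Kalman-style filtering over full 6-DoF poses; the function names and numbers here are illustrative, not from any real system): inertial dead-reckoning is fast but drifts because sensor bias compounds under double integration, while visual fixes are slower but drift-free, so blending the two keeps the estimate honest.

```python
import numpy as np

def integrate_imu(accel, dt):
    """Dead-reckon position by double-integrating accelerometer samples.
    Any constant bias in `accel` compounds quadratically into position drift."""
    velocity = np.cumsum(accel) * dt
    return np.cumsum(velocity) * dt

def complementary_fuse(imu_pos, visual_pos, alpha=0.9):
    """Blend the fast-but-drifting inertial estimate with slower,
    drift-free visual position fixes (a toy complementary filter)."""
    return alpha * imu_pos + (1 - alpha) * visual_pos

# A stationary device: the true position stays at 0, but a small
# accelerometer bias makes the inertial-only estimate wander off.
dt, n = 0.01, 1000
biased_accel = np.full(n, 0.05)     # 0.05 m/s^2 bias, no real motion
imu_pos = integrate_imu(biased_accel, dt)
visual_pos = np.zeros(n)            # the camera correctly sees no motion
fused = complementary_fuse(imu_pos, visual_pos)
# The fused estimate ends up closer to the true position than IMU alone.
```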


Source: The Tech Crunch


With new raise, Unity could nearly double valuation to $6 billion

Posted on May 9, 2019

Unity Technologies, the company behind one of the world’s most popular game engines, could nearly double its reported valuation in a new round of funding.

The company has filed to raise up to $125 million in Series E funding, according to a Delaware stock authorization filing uncovered by Prime Unicorn Index and reviewed by TechCrunch. If Unity closes the full authorized raise, it will hold a valuation of $5.96 billion.

A Unity spokesperson confirmed the details of the document.

The San Francisco-based company makes developer tools that let game-makers build titles and deploy them on consoles, mobile and PC. More than half of all new games are built using the platform. Customers pay for the platform per developer once their projects reach a certain scale.

Unity’s competitors include Fortnite-maker Epic Games, which has been able to rapidly acquire startups and game studios in the past two years, fueled by the profits of its blockbuster hit.

Unity most recently closed $400 million in Series D funding led by Silver Lake, a “big chunk” of which went toward purchasing the shares of longtime employees and earlier investors. The round left the company’s valuation north of $3 billion. The company, founded in 2003, has raised more than $600 million to date.

The company’s previous backers include Sequoia, DFJ Growth and Silver Lake Partners.

Earlier this year, Cheddar reported that Unity was eyeing a 2020 IPO, though the company did not comment on the report.


Source: The Tech Crunch


Get ready for a new era of personalized entertainment

Posted on Apr 13, 2019

New machine learning technologies, user interfaces and automated content creation techniques are going to expand the personalization of storytelling beyond algorithmically generated news feeds and content recommendation.

The next wave will be software-generated narratives that are tailored to the tastes and sentiments of a consumer.

Concretely, it means that your digital footprint, personal preferences and context unlock alternative features in the content itself, be it a news article, live video or a hit series on your streaming service.

The title contains different experiences for different people.

From smart recommendations to smarter content

When you use YouTube, Facebook, Google, Amazon, Twitter, Netflix or Spotify, algorithms select what gets recommended to you. The current mainstream services and their user interfaces and recommendation engines have been optimized to serve you content you might be interested in.

Your data, other people’s data, content-related data and machine learning methods are used to match people and content, thus improving the relevance of content recommendations and efficiency of content distribution.

However, so far the content experience itself has mostly been the same for everyone. If the same news article, live video or TV series episode gets recommended to you and me, we both read and watch the same thing, experiencing the same content.

That’s about to change. Soon we’ll be seeing new forms of smart content, in which user interface, machine learning technologies and content itself are combined in a seamless manner to create a personalized content experience.

What is smart content?

Smart content means that the content experience itself is affected by who is seeing, watching, reading or listening to the content. The content itself changes based on who you are.

We are already seeing the first forerunners in this space. TikTok’s whole content experience is driven by very short videos, audiovisual content sequences if you will, ordered and woven together by algorithms. Every user sees a different, personalized, “whole” based on her viewing history and user profile.

At the same time, Netflix has recently started testing new forms of interactive content (TV series episodes, e.g. Black Mirror: Bandersnatch) in which the user’s own choices directly affect the content experience, including dialogue and storyline. And more is on its way. With the Love, Death & Robots series, Netflix is experimenting with episode order within a series, serving the episodes in a different order to different users.

Some earlier predecessors of interactive audio-visual content include sports event streaming, in which the user can decide which particular stream she follows and how she interacts with the live content, for example rewinding the stream and spotting the key moments based on her own interest.

Simultaneously, we’re seeing how machine learning technologies can be used to create photo-like images of imaginary people, creatures and places. Current systems can recreate and alter entire videos, for example by changing the style, scenery, lighting, environment or central character’s face. Additionally, AI solutions are able to generate music in different genres.

Now imagine that TikTok’s individual short videos were automatically personalized with effects chosen by an AI system, so that the whole video was customized for you. Or that the choices in Netflix’s interactive content, affecting the plot twists, dialogue and even soundtrack, were made automatically by algorithms based on your profile.

Personalized smart content is coming to news as well. Automated systems, using today’s state-of-the-art NLP technologies, can generate long pieces of concise, comprehensible and even inventive textual content at scale. At present, media houses use automated content creation systems, or “robot journalists”, to create news material varying from complete articles to audio-visual clips and visualizations. Through content atomization (breaking content into small modular chunks of information) and machine learning, content production can be increased massively to support smart content creation.

Say that a news article you read or listen to is about a specific political topic that is unfamiliar to you. Compared with the version shown to a friend who’s really deep into politics, your version of the story might use different concepts and offer a different angle. A beginner’s smart content news experience would differ from the experience of a topic enthusiast.

Content itself will become a software-like fluid and personalized experience, where your digital footprint and preferences affect not just how the content is recommended and served to you, but what the content actually contains.

Automated storytelling?

How is it possible to create smart content that contains different experiences for different people?

Content needs to be thought of and treated as an iterative, configurable process rather than a ready-made static whole that is finished once it has been published to the distribution pipeline.

Importantly, the core building blocks of the content experience change: smart content consists of atomized modular elements that can be modified, updated, remixed, replaced, omitted and activated based on varying rules. In addition, content modules that have been made in the past can be reused where applicable. Content is designed and developed more like software.
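One way to picture those atomized modules is as tagged chunks activated by a rule that matches the reader’s profile. This is a minimal sketch; the `audiences` tagging scheme and the `assemble` rule are hypothetical illustrations, not any real system’s API:

```python
from dataclasses import dataclass

@dataclass
class ContentModule:
    body: str
    audiences: set  # hypothetical tags, e.g. {"beginner", "expert"}

def assemble(modules, profile):
    """Activate only the modules whose tags match the reader's profile."""
    return " ".join(m.body for m in modules if profile in m.audiences)

article = [
    ContentModule("The vote passed 52-48.", {"beginner", "expert"}),
    ContentModule("A filibuster is a delaying tactic.", {"beginner"}),
    ContentModule("Cloture was invoked under Rule XXII.", {"expert"}),
]

print(assemble(article, "beginner"))
# → "The vote passed 52-48. A filibuster is a delaying tactic."
```

The same module list yields a different story for an "expert" profile, which is the point: one piece of content, many assembled experiences.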

Currently a significant amount of human effort and computing resources are used to prepare content for machine-powered content distribution and recommendation systems, varying from smart news apps to on-demand streaming services. With smart content, the content creation and its preparation for publication and distribution channels wouldn’t be separate processes. Instead, metadata and other invisible features that describe and define the content are an integral part of the content creation process from the very beginning.

Turning Donald Glover into Jay Gatsby

With smart content, the narrative or image itself becomes an integral part of an iterative feedback loop, in which the user’s actions, emotions and other signals as well as the visible and invisible features of the content itself affect the whole content consumption cycle from the content creation and recommendation to the content experience. With smart content features, a news article or a movie activates different elements of the content for different people.

It’s very likely that smart content for entertainment purposes will have different features and functions than news media content. Moreover, people expect a frictionless and effortless content experience, so smart content differs from games: it doesn’t necessarily require direct actions from the user. If the person wants, content personalization happens proactively and automatically, without explicit user interaction.

Creating smart content requires both human curation and machine intelligence. Humans focus on things that require creativity and deep analysis while AI systems generate, assemble and iterate the content that becomes dynamic and adaptive just like software.

Sustainable smart content

Smart content has different configurations and representations for different users, user interfaces, devices, languages and environments. The same piece of content contains elements that can be accessed through a voice user interface or presented in augmented reality applications. Or the whole content expands into a fully immersive virtual reality experience.

In the same way as with personalized user interfaces and smart devices, smart content can be used for good and bad. It can be used to enlighten and empower, as well as to trick and mislead. Thus it’s critical that a human-centered approach and sustainable values are built into the very core of smart content creation. Personalization needs to be transparent, and the user needs to be able to choose whether she wants the content to be personalized or not. And of course, not all content will be smart in the same way, if at all.

If used in a sustainable manner, smart content can break filter bubbles and echo chambers, as it can be used to make a wide variety of information more accessible to diverse audiences. Through personalization, challenging topics can be presented to people according to their abilities and preferences, regardless of their background or level of education. For example, a beginner’s version of an article on vaccination or digital media literacy might use gamification elements, while a more experienced user gets a thorough, fact-packed account of the recent developments and research results.

Smart content is also aligned with the efforts against today’s information operations, such as fake news and its variants like “deepfakes” (http://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes). If content is like software, then legitimate software runs on your devices and interfaces without a problem, while realistic-looking but machine-generated suspicious content, like a deepfake, can be detected and filtered out based on its signature and other machine-readable qualities.


Smart content is the ultimate combination of user experience design, AI technologies and storytelling.

News media should be among the first to start experimenting with smart content. When intelligent content starts eating the world, one should be creating one’s own intelligent content.

The first players to master smart content will be among tomorrow’s reigning digital giants. And that’s one of the main reasons why today’s tech titans are moving seriously into the content game. Smart content is coming.


Source: The Tech Crunch


Google is reportedly shutting down its in-house VR film studio

Posted on Mar 14, 2019

Google is shutting down its Emmy Award-winning VR film division, Spotlight Stories, after six years of building out content, Variety reports.

We’ve reached out to Google for confirmation.

“Google Spotlight Stories means storytelling for VR. We are artists and technologists making immersive stories for mobile 360, mobile VR and room-scale VR headsets, and building the innovative tech that makes it possible,” the group’s site reads.

The Spotlight Stories team was part of the company’s Advanced Technologies and Products (ATAP) group. Much like Facebook’s ill-fated Oculus Story Studio, there was never a big focus on monetizing what was being created internally.

The studio’s best-received work, “Pearl,” was nominated for an Academy Award and won an Emmy in 2017. The group also worked with Wes Anderson to bring a VR behind-the-scenes featurette on the making of his film “Isle of Dogs.” In November, the group released its last major work, “Age of Sail,” a narrative film that could be watched on mobile and high-end VR systems.

Google has made significant investments in AR and VR, but has allowed competitors like Facebook and Apple to surpass its consumer efforts.

Google’s efforts on its VR program went full throttle in 2016 and early 2017, while the company sought to keep pace with Samsung, which was aggressively hawking mobile hardware it had built alongside Oculus. It’s rumored the company made significant changes to its immersive divisions after Apple introduced ARKit in mid-2017, aggressively shifting resources from its VR division to AR projects like its ARCore mobile augmented reality platform.

The company has not updated its Daydream View VR headset since 2017 and has ceded most of its ground to Oculus, allowing standalone products like Lenovo’s Mirage Solo to die on the shelf as it failed to update its platform or direct significant resources to bringing new content on board. Now, with the reported shutdown of Spotlight Stories, the company is making it clear that it doesn’t think building its own content is the right approach either.


Source: The Tech Crunch


Steam fights for future of game stores and streaming

Posted on Feb 26, 2019

For more than 15 years, Steam has been the dominant digital distribution platform for PC video games. While its success has spawned several competitors, including some online stores from game publishers, none have made a significant dent in its vice-like grip on the market.

Cracks, though, are seemingly starting to appear in Steam’s armor, and at least one notable challenger has stepped up, with potentially bigger ones on the horizon. They threaten to make Steam the digital equivalent of GameStop: a once unassailable retail giant whose future became questionable when it didn’t successfully change with the times.

The epic launch of an Epic Store

Photo by Neilson Barnard/Getty Images for Ubisoft

Epic Games has, in a remarkably short period of time, positioned itself as the successor to Steam. In December, the creator of the billion-dollar Fortnite franchise announced it was getting into the game retail business with the Epic Games store. Less than two months later, it had landed limited exclusivity deals with two publishers who chose to bypass Steam as they launch upcoming titles.

First up was Ubisoft, which announced that the PC version of Tom Clancy’s The Division 2, a highly anticipated action game, would be semi-exclusive to the Epic Games store (it will also be available on Ubisoft’s digital storefront). Ubisoft also said that “additional select titles” would be coming to Epic’s store in later months.

“We’re giving game developers and publishers the store business model that we’ve always wanted as developers ourselves,” said Tim Sweeney, founder and CEO of Epic Games. “Ubisoft supports our model and trusts us to deliver a smooth journey for players, from pre-purchase to the game’s release.”

Three weeks later, publisher Deep Silver abruptly discontinued pre-sales of its survival shooter Metro Exodus on Steam and announced the game would be available moving forward solely through the Epic Games store (previous Steam orders will be honored).

Steam’s past success is hitting new blocks

To be clear, Steam is hardly struggling. Last October at Melbourne Games Week, Steam announced it had 90 million monthly active users, compared to 67 million in 2017. Daily active users, it said, had grown from 33 million to 47 million.

Much of that growth came from China, where players are looking to circumvent the government’s crackdown on games. Domestic numbers, though, have been trending down, according to SteamSpy, a third-party tracking service.

Valve Software, which owns Steam, did not reply to requests for comment on this story. It did, however, post a statement on the Metro Exodus Steam page soon after Deep Silver announced its partnership with Epic, saying “We think the decision to remove the game is unfair to Steam customers, especially after a long pre-sale period. We apologize to Steam customers that were expecting it to be available for sale through the February 15th release date, but we were only recently informed of the decision and given limited time to let everyone know.”

So what’s the draw for game makers to sell via the Epic Games store? It is, of course, a combination of factors, but chief among them is financial. To convince publishers and developers to use its system, Epic takes only a 12% cut of game sale revenues. That’s significantly lower than the 30% taken by Valve on Steam (or the amounts taken by Apple or Google in their app stores).

To woo developers who use its Unreal graphics engine, Epic also waives all royalty fees for sales generated through the store. (Developers who use Unreal in their games typically pay a 5% royalty on all sales.)
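For a developer shipping an Unreal-built game, the two policies compound. Here is a back-of-the-envelope sketch using the rates quoted above on a hypothetical $60 title (the price is an assumption for illustration):

```python
price = 60.00  # hypothetical sticker price of one game sale, in dollars

# Steam: 30% store cut, plus Unreal's usual 5% engine royalty on sales.
steam_take_home = price * (1 - 0.30) - price * 0.05   # $39.00 per copy

# Epic Games store: 12% store cut, and the 5% Unreal royalty is waived.
epic_take_home = price * (1 - 0.12)                   # $52.80 per copy

print(steam_take_home, epic_take_home)
```

Under these assumptions the developer keeps roughly a third more revenue per copy on Epic’s store, which is the economic pitch behind the lower commission.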

The reason for those notably lower commissions, perhaps not surprisingly, ties back to Fortnite.

“While running Fortnite we learned a lot about the cost of running a digital store on PC,” says Sweeney. “The math is simple: we pay around 2.5% for payment processing for major payment methods, less than 1.5% for CDN [content delivery network] costs (assuming all games are updated as often as Fortnite), and between 1% and 2% for variable operating and customer support costs. Because we operate Fortnite on the Epic Games launcher on such a large scale, it has enabled us to build the store, run it at a low cost, and pass those savings onto developers.”

Owning the game customer

Photo by Andy Cross/The Denver Post via Getty Images

Higher commissions are just one of the issues developers and publishers have with Steam. While none were willing to go on the record, for fear of retribution from Valve or because they were not authorized to officially speak on their company’s behalf, the complaints generally echoed each other.


Source: The Tech Crunch


Robotics, AR and VR are poised to reshape healthcare, starting in the operating room

Posted on Feb 21, 2019

About 20 years ago, a medical device startup called Intuitive Surgical debuted the da Vinci robot and changed surgical practices in operating rooms across the United States.

The da Vinci ushered in the first age of robotic-assisted surgical procedures with a promise of greater accuracy and quicker recovery times for patients undergoing certain laparoscopic surgeries. 

For a time, it was largely alone in the market. It has skyrocketed in value since 2000, when the stock first debuted on public markets. From the $46 million that the company initially raised in its public offering to now, with a market capitalization of nearly $63 billion, Intuitive has been at the forefront of robotic-assisted surgeries, but now a new crop of startups is emerging to challenge the company’s dominance.

Backed by hundreds of millions in venture capital dollars, new businesses are coming to refashion operating rooms again — this time using new visualization and display technologies like virtual and augmented reality, and a new class of operating robots. Their vision is to drive down the cost and improve the quality of surgical procedures through automation and robotic equipment.

“There were 900,000 surgeries done using surgical robotics out of a total of 313 million surgical procedures,” globally, says Dror Berman, a managing director of Innovation Endeavors.

Berman is an investor in Vicarious Surgical, a new robotics company that plans to not only improve the cost and efficiency of surgical procedures, but enable them to be performed remotely so the best surgeons can be found to perform operations no matter where in the world they are.

“Robotics and automation present multiple opportunities to improve current processes, from providing scientists the opportunity to vastly increase experimental throughput, to allowing people with disabilities to regain use of their limbs,” Berman wrote in a blog post announcing his firm’s initial investment in Vicarious.

The $3.4 billion acquisition of Auris Health by Johnson & Johnson shows just how lucrative the market for new surgical robotics can be.

That company, founded by one of the progenitors of the surgical robotics industry, Fred Moll, is the first to offer serious competition to Intuitive Surgical’s technological advantage — no wonder, considering Dr. Moll also founded Intuitive Surgical.

Last year, the company unveiled its Monarch platform, which takes an endoscopic approach to surgical procedures, offering a less invasive and more accurate way to test for and treat lung cancer.

“A CT scan shows a mass or a lesion,” Dr. Moll said in an interview at the time. “It doesn’t tell you what it is. Then you have to get a piece of lung, and if it’s a small lesion, it isn’t that easy — it can be quite a traumatic procedure. So you’d like to do it in a very systematic and minimally invasive fashion. Currently it’s difficult with manual techniques and 40 percent of the time, there is no diagnosis. This has been a problem for many years and [inhibits] the ability of a clinician to diagnose and treat early-stage cancer.”

Monarch uses an endoscopy procedure to insert a flexible robot into hard-to-reach places inside the human body. Doctors trained on the system use video game-style controllers to navigate inside, with help from 3D models.


Source: The Tech Crunch


HTC revamps standalone VR headset to keep pace with Oculus while it looks to big business

Posted on Feb 21, 2019

HTC has had a bit of a rough ride these past few years. After betting the farm on VR, the company has had to make some substantial business strategy shifts to keep the division kicking in the face of a less-than-robust headset market and a behemoth, margin-less competitor that’s fine with losing a few billion dollars.

HTC’s latest play, a revamped Vive Focus headset that features tracked motion controllers, could be seen simply as playing catch-up with Oculus and its upcoming Oculus Quest standalone headset, but it’s likely only aiming to keep pace with innovation to dissuade enterprise customers from switching teams.

The Vive Focus Plus maintains a lot of the system specs of the previous generation, but taps some souped-up “visuals” and an interesting new controller tracking system that relies on ultrasonic feedback rather than camera-based optical tracking to locate the controllers in 3D space. The tracking system is a bit peculiar-sounding, but Qualcomm built out support for the tech in its VR reference design headset and the Focus Plus is again powered by the Snapdragon 835 chipset, according to a spec list obtained by Road to VR.

Since launching the HTC Vive in 2016, HTC has gradually shifted its business to enterprise customers looking to outfit their organizations with headsets for training and design visualization purposes. The company has, at times, tried to play both sides, especially with its desktop VR hardware with pricing focused on enterprise customers but marketing aimed at consumers as well. That’s easier, given the Vive Pro’s compatibility with Valve’s SteamVR platform and the associated content, but HTC can’t just wander into a consumer mobile platform in the U.S. without a more concerted push.

There are no details yet on a release date or enterprise pricing. The regular Vive Focus starts at $599.



This is the best VR headset I’ve ever demoed

Posted by on Feb 19, 2019 in varjo, virtual reality, virtual reality headset | 0 comments

Before Oculus kickstarted much of the fervor around consumer headsets, the VR headsets being built for enterprise were multi-thousand-dollar rigs that still sucked. As Oculus and HTC expanded their platforms, many of these enterprise-focused VR companies shriveled up or were forced to significantly retool how they approached fat-wallet customers.

Things are even more complicated now; Oculus has priced pretty much every other manufacturer out of the consumer market, and now a good deal of those consumer VR companies are chasing enterprise customers. Microsoft has been doing this with its Mixed Reality platform as well, but the customer base really doesn’t seem to be large enough to necessitate 14 hardware competitors.

Varjo has a unique strategy to stand out from competitors — it’s called actual product differentiation.

The Finland-based VR startup’s new VR-1 headset is a bulky device that runs on SteamVR tracking, but its high-resolution sweet spot, which delivers a Retina-type display’s worth of pixel density, transforms it into an entirely different type of product. I don’t want to give this team more credit than it deserves, because the technical solution is novel but not mind-bogglingly complex from a hardware point of view; nevertheless, this headset delivers a pretty transformative experience.

The headset works by pairing a conventional-resolution VR display with miniature ultra-high-res displays whose output is reflected by lenses and mirrors into the center of the user’s vision. The company says this sweet spot (which is about the size of the current-gen HoloLens field of view) offers about 20x the resolution of other consumer VR headsets out now. There are a few optical quirks with the current setup, and it’s a much different design than the prototype I demoed in 2017.

HTC Vive Pro versus Varjo VR-1 (courtesy of Varjo)

Notably, Varjo’s first commercial product ditches the varifocal lens approach that was one of the hallmarks of its early prototypes. Varifocal lenses let users focus on different areas of an environment, including things within a few inches of their face, which is impossible on current headsets. Other perks include not having to wear glasses, because the lenses can adjust for your prescription. But the systems are mechanically operated, which surely has more potential as a failure point than fixed-lens setups. By ditching the varifocal approach, Varjo was able to expand the field of view of the high-resolution sweet spot with a fixed lens. Given the trade-offs, it seems like a wise choice.

The substantial pixel bump also makes it feel like a completely different type of device. It’s insane. Pixels just aren’t visible, so most of the remaining limitations lie in what’s being rendered. It’s a decidedly premium experience; the VR-1 retails for just under $6,000, or 17 times the price of the Oculus Rift.

The solution Varjo built out stands on its own for now, but the limitations are quickly apparent in terms of where other headsets can surpass the experience. Future hardware will need some type of varifocal approach and will assuredly rely on tech like foveated rendering to determine where full resolution is rendered rather than a fixed high-res reflection. To VR hardware aficionados looking at pushing scalable solutions, I’m sure the VR-1 feels a bit like cheating, but cheating feels good sometimes.

The VR-1 is, again, $5,995, and that price doesn’t even include the controllers or SteamVR tracking sensors. It exists and it’s on sale now for business customers.



References to next-gen Oculus ‘Rift S’ headset reportedly found in internal code

Posted by on Feb 6, 2019 in Oculus, virtual reality, virtual reality headset | 0 comments

After we broke the news in November that Oculus’s former CEO had left the company partially due to disagreements on the company’s PC hardware direction, including the cancellation of a high-end “Rift 2” in favor of a more iterative “Rift S” headset, new details are emerging that confirm Facebook’s directional shift for its flagship headset.

User interface code discovered by UploadVR seems to confirm that Oculus is actively readying its software for the “Rift S” hardware. Details, including the “Rift S” name and the fact that the headset will be powered by onboard cameras rather than external sensors, were apparent from code that referenced options to change settings on the “Rift S cameras.” The report also notes that the displays could function differently, using a software-based approach to adjust for the distance between a user’s eyes.

We’ve reached out to Facebook for comment.

Our initial report detailed that Oculus would be abandoning its external sensor system and relying on the onboard tracking camera setup from the Oculus Quest headset. Additionally, we shared details that the device’s display resolution would be getting a small bump.

We’ve been told that the Rift S headset will launch this year. Last year, the Oculus Go headset was revealed at Facebook’s F8 conference; will we see a similar unveil in a few months for both the Rift S and Oculus Quest, or will the social networking giant space out its VR hardware launches a bit?

