Categories
Augmented Reality Policy, Legal, Regulatory Social and Societal

Where is the line?

If, when using the camera and microphone of my future AR-assisted display, I were to record an experience including a conversation, where is the line between what is yours and what is mine? Or suppose I had my personal cultural assistant operating while visiting a gallery, or my digital shopping assistant running in a store. What parts of the data would belong to the museum or the retail merchant?

In the past six months there's been a lot of attention paid in the media to personal privacy (and personal data protection) and the use of Project Glass. No, Glass isn't AR. But Glass has enough in common with products that do (and will) provide AR features that we can use this post by Robert Scoble as an example of the dialogue that is, and will be, taking place.

We've probably all read articles like this one and had discussions with people about how to manage personal privacy when sensors are always (or almost always) on. Scoble is of the opinion that privacy advocates sounding the alarm now will look like fools in a year's time.

I don't think it's that simple. Sure, there are already laws to protect personal privacy. In California, it's illegal to record another person's voice without consent or to capture video or photos where there is an expectation of privacy. I'm told that there will be ways to address concerns with on-device (on-display) software and/or hardware (e.g., Glass' projector light is on when you are recording, a flap over your camera). I'd like to learn about approaches to address and control privacy that might be more discreet and more reliable, or that would work without relying on the manufacturer of the device, perhaps using cloud-based services.

These still don't get to the question of the law. I'm going to be giving a presentation to an association of Law Librarians tomorrow during which I will raise this question of capture in public spaces. Although the managers of law libraries don't themselves argue cases of this type, they might know of precedent. I look forward to an interesting dialog about the direction of change and how those who manage public spaces will address this delicate subject in the future.

When there is precedent in the public parts of our lives, we will all be wiser about where to draw the line in our personal spaces.

Categories
Augmented Reality Social and Societal

The Contextually-Aware Age

I am writing a position paper about how position (ironically) can reduce the amount of data sent to a mobile device and make what is sent more likely to be of interest to the user. Since the paper will be distributed as part of a campaign to promote an OGC event about mobile standards later this month, I’m using the phrase “Next Generation Location-based Services” to describe this, but I’m exploring other closely-related ideas and topics.
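To make the core idea concrete, here is a minimal sketch (my own illustration with invented points of interest, not something from the paper) of how a service can use position to reduce what it sends: the device reports one position fix and receives only the nearby, distance-ranked results instead of the whole data set.

    import math

    # Hypothetical points of interest: (name, latitude, longitude)
    POIS = [
        ("Museum of Natural History", 46.2044, 6.1432),
        ("Ski lift ticket office", 46.3833, 6.2348),
        ("Lakeside cafe", 46.2089, 6.1500),
    ]

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters (haversine formula)."""
        r = 6371000  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def nearby(user_lat, user_lon, radius_m=1000):
        """Return only the POIs within radius_m of the user, nearest first."""
        hits = []
        for name, lat, lon in POIS:
            d = distance_m(user_lat, user_lon, lat, lon)
            if d <= radius_m:
                hits.append((d, name))
        return [name for d, name in sorted(hits)]

    # One position fix in; a short, relevant list out
    print(nearby(46.2050, 6.1440))  # the ski lift, ~20 km away, is filtered out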

A lot of people, especially academics (here's an example), developed excellent thinking around such services as far back as a decade ago, and a very comprehensive CSC Grant report on NGLBS was published in 2010.

I was watching the recently-posted video by Robert Scoble on the new real-time information feed that TagWhat released the other day. About 8 minutes into the video, I heard that Robert is co-writing a book about contextually-aware services, The Age of Context, with Forbes writer Shel Israel. I look forward to reading it.

The Scobleizer tweet stream is packed with excellent examples of how mobile sensors on devices are going to impact the services we use. So, while the concept of Next Generation LBS isn’t new and it really hasn’t become common (yet), it’s fair to say that it is trendy.

What Scoble hasn’t discussed, at least in what I’ve read of his writing on the topic to date, is the problem of proprietary APIs.

Scoble is a huge fan of social media, especially Google+. On the day that I was watching the TagWhat video interview, I also learned that Robert was going to be using the new Smith Optics Elite Division ski goggles with Recon Instruments technology. I haven’t read Robert’s review of these yet but I hope he does a special about Augmented Reality when skiing.

In the social media "universe" there are few, if any, open standards. Unfortunately, many of the services to which applications send requests for contextually-relevant information are social media services with proprietary interfaces. Application developers must either develop their own data, limiting their users' access to other appropriate data sets, or write to the unique features of each service interface in order to relay the data to their servers, where they can filter it as needed by the user and according to the preferences and context settings of the mobile client application.

Lack of standards for contextually-relevant data places a heavy burden on developers. In contrast, if the burden on developers can be reduced significantly with the use of open data, linked data and, above all, open Web-based services, more developers will use more diverse data sets in their services and the era of contextually-aware mobile services will blossom.
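As an illustration of that burden (the service names and response formats below are hypothetical), each proprietary API needs its own adapter to map responses into the application's internal model; an open, standard interface would collapse all of this into a single code path.

    # Hypothetical adapters: one per proprietary social-media API.
    # Every new service means another function like these to write and maintain.

    def normalize_service_a(raw: dict) -> dict:
        """Service A answers with {'title': ..., 'geo': {'lat': ..., 'lng': ...}}."""
        return {"name": raw["title"], "lat": raw["geo"]["lat"], "lon": raw["geo"]["lng"]}

    def normalize_service_b(raw: dict) -> dict:
        """Service B answers with {'label': ..., 'position': [lon, lat]}."""
        return {"name": raw["label"], "lat": raw["position"][1], "lon": raw["position"][0]}

    ADAPTERS = {"service_a": normalize_service_a, "service_b": normalize_service_b}

    def ingest(source: str, raw: dict) -> dict:
        """Relay a raw record into the common internal model for filtering."""
        return ADAPTERS[source](raw)

    print(ingest("service_a", {"title": "Cafe", "geo": {"lat": 46.20, "lng": 6.14}}))
    print(ingest("service_b", {"label": "Gallery", "position": [6.15, 46.21]}))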

My position paper describes how some standards already exist to address the problem. The Open Geospatial Consortium provides the most widely-implemented set of open standards for geospatially-referenced information for use in the development of contextually-aware enterprise and consumer services. I'm looking forward to learning more about these during the upcoming Next Generation LBS event the OGC is holding on February 27. I hope to meet many of you there!

Categories
2020 Augmented Reality

Viewing Small Parts of Big Data

2013 is only 32 hours "old." Can I view how my place on the planet has evolved in those hours? No.

I can't see how many people went skiing on the mountain at which I'm looking or what the temperature was (or is) as I walk along the edge of Lake Geneva. But the information is "there." The data to answer these questions are somewhere, already part of Big Data.

[Photo: Dawn of 2013 on Lake Geneva]

It could be displayed on my laptop screen, but I'm convinced it would be more valuable to see the same data on and in the world. With cloud-based tools it's getting easier to make small, select data sets available in AR view, but we are far from being surrounded by this data.

One of the issues stems from needing a human, a developer, to associate an action with every published data set. Click on the digital button to go to the coupon. Click on the flag to see the height of the mountain.

What if some data were just data? Could we, without associating an action to them, have data automatically made visible?
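Here is a sketch of what I mean (the self-describing record format is invented for illustration): if a published record declares what kind of data it is, a generic renderer can decide how to present it, without a developer first wiring an action to each data set.

    # Hypothetical self-describing records: no per-dataset action required.
    RECORDS = [
        {"kind": "measurement", "label": "Air temperature", "value": 4.2, "unit": "C"},
        {"kind": "count", "label": "Skiers on the mountain today", "value": 1250},
        {"kind": "place", "label": "Lake Geneva shoreline"},
    ]

    def render(record: dict) -> str:
        """Choose a presentation from the record's declared kind alone."""
        if record["kind"] == "measurement":
            return f'{record["label"]}: {record["value"]} {record["unit"]}'
        if record["kind"] == "count":
            return f'{record["label"]}: {record["value"]:,}'
        return record["label"]  # a bare place name is still displayable

    for r in RECORDS:
        print(render(r))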

The high cost and effort still required to make "just data" about a user's context and surroundings accessible on the fly are major impediments to our being able to use the digital universe more fully. Overcoming these obstacles will require open interfaces, many of which could be defined by standards, upon which new services will be offered.

But there also must be deeper thought and research into how we are presented with data, what it looks like, and the dimensions of the data provided to us. What is the appropriate resolution? How do I adjust this?

These questions are not trivial. I'm disappointed that the new IDC study on Big Data in 2020 doesn't go into how we will visualize Big Data in the future. I hope a future study will examine how small a data set can be while remaining valuable for different tasks and different modes of professional or consumer use. This research could help us better understand the Big Data opportunities as well as help us better quantify the value of Augmented Reality.

Let's hope that we will see this topic raised by others (with the data to define it better) before the end of this young year!

Categories
Augmented Reality Business Strategy

Get Quantitative

Models developed by industry analysts frequently predict astronomically large growth for Augmented Reality. A good report has to have a lot of illustrative examples, describing the richness that AR is capable of delivering and forecasting how this will evolve over the forecast period.

I don't intend to publish a full market research report about AR because, at the speed the business is evolving, it would be out-of-date before it was completed. Also, market and trend forecasting aren't the only services I provide, and my projects don't all need quantitative proof to be successful. That said, regularly and reliably measuring the growth of AR and sharing those metrics with the world are very much on my list of goals for the near future.

In May 2012 I began the process of getting quantitative about mobile AR by inviting a half-dozen of my friends in AR companies to share a little of what they are doing. In a survey I asked questions about:

  • What AR metrics are available?
  • When did you begin capturing mobile AR metrics?
  • How are these metrics captured?
  • Who uses (what is the purpose of gathering) these metrics?
  • Who has access to them?
  • How long are they stored?
  • What are the conditions or agreements with the users or content publishers?

I learned that, in 2012, there was no consistency in how metrics were collected or used. Maybe that's good news, because it permits us to develop new methods and avoid having to re-engineer systems.

In preparation for a new campaign on mobile AR metrics, their importance and methods for acquiring and comparing them, I've been gathering my own examples of how metrics are gathered and communicated.

I like the infographics approach. The one I provide here was issued by Blippar about the campaign they did with Shortlist magazine, but I learned about it when it was published in this post by Onno Hansen, the author of the IDentifEYE blog about AR and education.

The fact that nearly 10% of the audited circulation (over 51K out of 529K) actually used the application/features is not all that surprising when you understand that the target audience of this publication, "high class city-dwelling men," is highly technology-centric. They also like a good deal. The weekly print publication is distributed free at newsstands in the UK.

One more indicator that these guys like a good deal is the click-through rate of 13.4%. This is phenomenal for a magazine, I think. If you have other examples of high CTR for print media, I'd love to hear about them.
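For the curious, the arithmetic behind the "nearly 10%" figure, as I read the infographic:

    circulation = 529_000  # audited weekly circulation
    users = 51_000         # readers who used the AR features
    print(f"adoption: {users / circulation:.1%}")  # adoption: 9.6%, i.e. nearly 10%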

Categories
Augmented Reality Business Strategy Standards

Three Giants (and the Hobbit)

In Greek Mythology, the Hekatonkheires were children of Gaia (Earth) and Uranus (sky). They were three incredibly strong giants, each had 100 hands and 50 heads, and their ferocity surpassed that of all the Titans.

In today's information universe, at least in this version of the story, they are reincarnated and will have the option of doing good, defeating evil and helping the planet to become a better, richer place for human beings. There are many beautiful analogies between Greek Mythology and my current thinking, but I will limit myself to only the Giants. The three can be none other than Apple, Microsoft and Google.

For years we've heard rumors and seen signs in the patent filings (first in Feb 2010, then here, and more recently, here and here) that at least a few people within Apple are working on displays that would provide users the ability to view 3D without glasses or to wear displays on their heads at eye level, otherwise known (in my lingo) as "hands-free displays." Of course, these could be useful for a variety of applications not limited to Augmented Reality, but depending on their features, and how/when the content provided to them appeared, we can assume that these devices would (will) at least be suitable for certain mobile AR applications.

A year ago, rumors began to circulate about Google's now widely acknowledged and publicized eyewear project (Project Glass). That lessened my anxiety about Apple coming out with a new product that would once again transform the form factor of mobile computing (a good thing) while furthering (bad news) its closed and proprietary, yet very valuable and useful, developer ecosystem.

At least the end users, I told myself, will have a choice: an Android alternative to iOS-based hands-free displays and, instead of one proprietary platform controlling the universe, we will have two. Google will provide Android-friendly and Android-based eyewear for content and experiences that are published in what will probably begin as a Google proprietary format, while Apple will provide different (or some similar) content and experiences published only in its proprietary format.

At least with two Giants there's no doubt there will be a war! My fear remained that one of the two would win and we would not benefit from a future of open and interoperable AR content and experiences, based on open interfaces, standards and wild flashes of innovation emerging out of nowhere but catching on (one can suppose this will happen more easily in open ecosystems than in closed ones).

My hopes were boosted when, on November 13, Vuzix finally disclosed its answer to Project Glass, the M100. The announcement that the M100 won the CES 2013 Design and Engineering award (in the Wireless Handset Accessory category) got picked up by some big bloggers here, here and here, as well as hundreds of smaller readership sites. I think of Vuzix as the Hobbit in this case. Don't worry, there were few giants in Tolkien's mythology so I'm not going to go far here!

When, earlier this week, the news broke (see the Guardian article and the Slate.com piece) that Microsoft has been granted a patent on its own development project (no doubt with the help and support of Nokia) resembling those of Apple and Google, I shrieked with delight!

A third giant entering the ring has two impacts for all end users, for all the content in existence already and to come, and for our AR Standards Community activity.

First, and most directly, it puts content publishers, Goliaths like CondeNast as well as the micro-publishers, in a position of having to support multiple display platforms in their future AR-ready content catalogs, which is (I hope) prohibitively expensive. The content publishers will be able to unite and encourage the display providers to open at least some interfaces to common code and, over time, maybe even reach full interoperability. In an ideal scenario, the owners of content around the world, beyond the three giants themselves, will simply ignore the three competing platforms until a set of simple tags and functionality are agreed upon and implemented across them.

Second, the work our community began in the summer of 2012 on the requirements of hands-free AR devices will have the benefit of more open minds: people who are working on their own hardware and software, who want to be heard and to express their vision of an open world while the war of the Titans is raging. The parallel, I believe, is that today's innovators and entrepreneurs who want to develop totally new information experiences in the real world, unlike anything we've had or seen before and for the benefit of mankind, are like the Olympians of Greek mythology. Perhaps, by having the three Giants agree on some level, if not completely, about open Augmented Reality, the doors for the future of AR for the "rest of us" will open.

And there's another reason for mentioning Greek mythology and my hope that the myth of the Giants is not entirely re-enacted in our days. In my modern version, the three giants are allowed to come out and play nice with Cronus. If your Greek mythology is as rusty as mine, I will share with you that during the War of the Titans, the Giants helped the Olympians overthrow the Titans, of whom Cronus was king. In modern day, the Khronos Group is a strong supporter of open AR through open, royalty-free specifications of interfaces for the hardware in most of our mobile devices.

Categories
Augmented Reality Research & Development

Physical World as an Interface

Although there are a growing number of excellent examples of mobile AR, and even reports of positive return on investment, I shudder at the thought that the first application the term "Augmented Reality" brings to the minds of most people will be a game played with the wrapping of a candy bar. OK. I get it. The power of engagement.

AR experiences that fail to bring value to the user (beyond a quick thrill) in return for their attention are unhealthy for our image. People fail to think sufficiently, or are not paid enough to think, about the impact this technology will have and how best to use it.

In my opinion, one profound impact of AR will be to turn the user's immediate environment into the interface for search and interactivity with digital information. Time for a new term: turning the physical world into the interface for the digital one is an extension of skeuomorphism.

According to the Wikipedia definition, a skeuomorph is a physical ornament or design on an object copied from a form of the object when made from another material or by other techniques. It's a principle that Apple, while under the direction of Steve Jobs, was known for. The debate over the merits of Apple's extensive use of skeuomorphism became the subject of substantial media attention in October 2012, a year after Jobs' death, largely as the result of the reported firing of Scott Forstall, described as "the most vocal and high-ranking proponent of the visual design style favored by Mr. Jobs".

There are already examples of AR permitting the physical world to become the interface for the digital one. One I'm hoping will be repeated in many public spaces is the interactive lobby. If you are not already aware of this Interactive Spaces project, developed earlier this year for an Experience Center on the Mountain View Google campus, I highly recommend getting acquainted with the goals and people behind it on this blog post.

In this example, cameras in the ceiling detect the user's presence, and moving around in the space causes objects to move, sounds to be produced, and more.
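In spirit, the pattern is very simple. A toy sketch (all zone names and effects below are invented): presence detected in a part of the room maps to an effect, and that mapping is all the "interface" a visitor needs.

    # Hypothetical mapping from floor zones to effects in the space.
    ZONE_EFFECTS = {
        "entrance": lambda: print("play welcome chime"),
        "center": lambda: print("move projected objects toward the visitor"),
        "wall": lambda: print("brighten the wall display"),
    }

    def on_presence(zone: str) -> None:
        """Called whenever the ceiling cameras detect a person in a zone."""
        effect = ZONE_EFFECTS.get(zone)
        if effect:
            effect()

    # Simulate one visitor walking through the lobby
    for zone in ["entrance", "center", "wall"]:
        on_presence(zone)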

Expect many more examples in 2013.

Categories
Augmented Reality Events

InsideAR 2012

InsideAR, conducted earlier this week in Munich, was an outstandingly well-balanced event. It was also a sizable "AR industry insiders" gathering produced without a professional event organization. We were nearly 500 participants for two jam-packed days. Thanks to metaio for producing this exceptional human experience!

What made it different and sufficiently exceptional for me to be thinking about it for days, even to the point of inspiring me to dedicate this post to the event? First, the people. There were people from every continent and segment of the ecosystem. For example, I had the pleasure of being introduced to Ennovva, an experience design and AR development company from Bogotá, Colombia. There was a smattering of American companies that don't frequently make it to European AR industry events: Second Site, Vuzix, and Autodesk. Of course, the European community of AR developers was well represented, and there were many loyal metaio customers who have been using AR with highly quantifiable results, such as Lego, Volkswagen and IKEA. Smaller companies also made a good showing. There were Asian partners and customers in attendance. There were newbies seeking to be introduced to AR as well as founders of the industry, such as Daniel Wagner and Ron Azuma.

metaio's announcements were also important and impressive. Junaio is coming along nicely but so are Creator and metaio Engineer.

Representing the technologies for AR, many of metaio's large partners were there—ARM, NVIDIA, ST Ericsson among them. And, notably, the hosts even welcomed their competitors. I chatted with representatives from Qualcomm, Wikitude and MOB Labs; however, I didn't see any Layar folks in attendance (Martin Adam, of mCRUMBS, was showing Layar-based experiences).

The presentations were (with only one exception) outstanding. Each day there were several sessions featuring metaio products. Watch the keynote here. On stage, the balance between live demonstrations and slideware was admirable, making the new product announcements compelling and strategies easy to understand. Clearly, engineering at metaio has been very busy over the past year, but so have those who operate the company’s communications systems. The company even launched a new industry magazine!

Audience attention was still high before lunch on Tuesday when I shared the community's vision for open and interoperable AR and how this group of dedicated people is working together to approach the diverse challenges. See slides here and video of the Open AR talk here. I expect to see some of the new faces who came up to me after the talk at future community meetings.

In the exhibition space, metaio and its partners showcased AR through many fantastic demonstrations, permitting visitors to touch and use AR in specific use cases and domains, such as automotive, games and packaging. The Augmented City, one of my favorite domains for AR to bring value to citizens and managers of urban settlements, featured prominently in sessions and in the demonstration area.

I thoroughly enjoyed watching, speaking and catching up with the whole metaio team—from the co-founders to the very newest employees (wave to Anton Fedosov, and congratulations on the smooth landing!). They all moved together like a well-oiled event production machine, from the front desk to the staff meeting areas in the loft, and made us feel like part of their family. It was also an opportunity to put faces to names I recognized. Irina Gusakova, who was an invaluable resource by e-mail prior to InsideAR 2012, made me feel like we were long lost friends.

Finally, it seems trivial to some, but in my experience it is important to fuel the body as well as the mind. Beverages were always plentiful and the food was authentic and available when needed. A visit to Munich would not be complete without Oktoberfest, and metaio saw to it that we finished in style under a tent in the center of the city's annual festivities. This event is definitely on my calendar for 2013!

Categories
Augmented Reality

Is it Augmented Reality?

Television. From a former life I vaguely remember this broadcast medium that was (and still is for some people) provided on a screen in a defined sequence of segments called "shows," in an order defined by something that was called a "program." The content is professionally produced and sometimes approximates the real world. Then there is the genre of television called "reality TV," but that's something else.

Companies that prepare content for broadcast sometimes mix a video signal from a television camera with a digital data stream in such a way that one (the digital data) overlays the signal from the camera, synchronized in real time so well that the viewer can imagine that the line is "drawn" in perspective in the real world. The clearest case of this is the first down line in American football. A line appears on the television, over the video, to show the viewer the yardage the offense must reach for a first down. Those who are in the stadium cannot see the line.

A recent article about Augmented Reality (principally about the use of AR in medical use cases), published on the National Science Foundation web site, described the experience of seeing the first-down line on television as an example of Augmented Reality. Unfortunately, the differences between composing a video in the studio and sending it out to millions of viewers over a broadcast medium, and composing an AR experience in real time on a user's device for viewing from precisely one pose, are too numerous to be overlooked.

Here are a number of ways the two differ:

#1 pose: the content that is captured by the television camera is destined to be broadcast to a mass audience. It may be broadcast globally or locally, but it is still a one-to-many signal. In a broadcast studio, the pose of the user (the individual for whom the composed scene is visible), that is, the viewer's context and position with respect to reality, is in no way utilized to create the experience (remember "AR Experiences"). In television there are "viewers"; in AR there are "users."

Test: If the viewer looks 180 degrees from where the composed scene is rendered, no longer viewing the television at all, the scene (first down line overlay on the video signal) is still there. If an AR user looks away from the point of interest, the augmentation no longer appears.

#2 real time: see point #1. Broadcast looks like it's real time, but it's delayed with respect to what the user does.  If you replay the same sequence of frames captured by the television camera, say 5 milliseconds, an hour or a year later, the same overlay will be possible. The AR experience requires that all the elements be exactly the same to reproduce the same AR experience. By definition, every AR experience is unique because we are unable to travel backwards in time to repeat a moment in the past.

#3 reality: What the user sees on the TV screen is a digital overlay over digital media. It is composed centrally by software in the studio. Is the television camera operator's point of view the "reality"? Yes, but only for the operator of the camera.
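The test under point #1 can be stated almost literally in code. A simplified sketch (real AR tracks a full six-degree-of-freedom pose, not just a compass heading): the augmentation exists only while the user's pose puts the point of interest in view, whereas the broadcast overlay is composed once, for everyone.

    def angle_between(user_deg: float, poi_deg: float) -> float:
        """Smallest angular difference between the gaze direction and the POI."""
        return abs((poi_deg - user_deg + 180) % 360 - 180)

    def augmentation_visible(user_heading_deg: float, poi_bearing_deg: float,
                             fov_deg: float = 60) -> bool:
        """AR: the overlay is rendered only while the POI is in the field of view."""
        return angle_between(user_heading_deg, poi_bearing_deg) <= fov_deg / 2

    print(augmentation_visible(90, 95))   # looking toward the POI -> True
    print(augmentation_visible(270, 95))  # looking 180 degrees away -> False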

I understand that the mobile device-based AR experiences we have today suffer from the same weakness in the definition of "reality".

My conclusion is that "broadcast AR" is a misnomer. It may be helpful for introducing the concept of a digital overlay, but it should not be confused with "real" AR. Over time, as more people have AR experiences of their own, we will have less need for poor analogies to define AR, and there may even come a day when we can drop the label "AR" entirely.

Categories
Augmented Reality News

Skin in the Game

Those who read this blog are more than likely among the group that shares some level of conviction (maybe not as strongly as I do!) that Augmented Reality is going to be important. We are a small minority, but April 2012 will represent an important historical milestone. Perhaps it is as important as the June 2009 release of Layar's AR browser in terms of demonstrating that AR has traction, has graduated from the laboratory, captured imaginations and will have huge commercial impacts.

In the past three weeks many more people are finally getting to see the signs that indicate how important Augmented Reality will be in the future. The curtain that formerly prevented people from knowing how much investment was truly happening is dropping. Major companies are now following Qualcomm and putting resources into AR.

They are putting "skin in the game" and that's what it takes to convince many (including mobile pundits such as Tomi Ahonen and Ajit Jaokar) that AR has passed a threshold. In case you didn't catch it, Tomi posted on his blog, Communities Dominate Brands, on April 11, 2012 (probably the shortest post he's ever written!) that he has seen the light and he can now believe that AR is the planet's 8th mass medium.

Twenty-eleven was a slow year for AR revenue growth, but those who were paying attention could see small signs of the growing capital influx. Intel had already demonstrated its interest in AR by investing in Layar in late 2010 and in Olaworks in early 2007, and it expanded its investments (e.g., in Total Immersion). Texas Instruments, Imagination Technologies, ARM, ST Ericsson and Freescale all revealed that they have established programs, on their own or in partnership with AR companies, to accelerate AR on mobile platforms.

But, with only a few exceptions, these announcements by semiconductor companies were "me too," forced by the apparent (media) successes of Qualcomm's Vuforia and Hewlett Packard's Aurasma. These last two companies have heavily contributed to the media's awareness of mobile AR, but, sadly, also contributed to the perceived image that AR is a gimmick.

We can now pinpoint the catalytic event after which AR is taken more seriously: the 9:00 AM PDT, April 4, 2012 announcement by Google confirming prolific rumors that it is indeed working on see-through head-mounted displays (posted on TechCrunch here). Many jumped to the conclusion that these are AR glasses, although the applications are not, strictly speaking, limited to AR. A blog called Glasses from Google calls them "a new paradigm in computing."

While the figures have not been (and never will be) disclosed by Google, I estimate that the company has already invested well over $4M, approximately ten percent of the entire AR industry's revenues in 2011 (implying industry revenues of roughly $40M that year), to get to its current prototype (even one that reboots regularly). And (most likely) without reaching far beyond its internal intellectual property and a few consultants. Note that, in light of the Project Glass announcement, Google's acquisition of Motorola Mobility for a total of about $12.5 billion is very strategic. It surpasses the October 2011 $10 billion Hewlett Packard acquisition of Autonomy by only a few billion dollars. Very big Skin in the Game.

While the global media has for 15 days steadily reported on and speculated about Project Glass, and the video broke social media records, this is not the only evidence of new investments being made in mobile AR in April 2012. Tiny by financial standards, but significant developments nevertheless, include the Olaworks acquisition, and today Total Immersion announced that Peter Boutros, a former Walt Disney VP, will be its new president. And Google isn't the only company working on eyewear for information. For example, Oakley's April 17 announcement that it has been working on an eyewear project shouldn't have taken anyone by surprise, but it managed to make the headlines.

What and who is next?!

And how much skin will the next announcement be worth?

Without strong business cases this second question will be the most difficult to answer and, for this reason, it is a topic to which I have recently dedicated another post.

Categories
Augmented Reality News

Intel’s First Full Acquisition of Korean Firm, Olaworks

The Korea Herald is not where I normally get my news. Nor do I regularly visit The Register (whose tag line is "Biting the Hand that Feeds IT"). But today I visited both in order to learn more about Intel's $30.7M acquisition of Olaworks.

In case you are not familiar with it, Olaworks was one of the early companies to dedicate itself first to computer vision (primarily face recognition) and then to apply its intellectual property to solve Augmented Reality challenges. The founder of Olaworks, Dr. Ryu Jung-hee, has been a long-standing friend and colleague and one of the most outgoing Koreans I've met. Ryu has attended at least four out of the past five AR Standards Community meetings and miraculously shows up at other events (e.g., he accepted my invitation to come to the first Mobile Monday Beijing meeting and showcase on the topic of mobile AR, and presented about Olaworks during the first AR in China meeting, one year ago).

Not only am I pleased for Ryu and the 60 employees who work for Olaworks, I'm also impressed that an analyst concluded that one reason for the acquisition might be Olaworks' facial recognition technologies. At present LG Electronics, Pantech, and HTC make use of Olaworks' face recognition technology in their phones. Gartner analyst Ken Dulaney told The Reg that Intel's decision to acquire was probably informed by the growing popularity of face recognition software in the consumer space. In fact, Texas Instruments recently shared with me that they are very proud of the facial recognition performance they have on the OMAP. Face recognition could be used for a lot of different applications (not just AR) when it is embedded into the SoC, as an unnamed source suggested might be Intel's intention, since Olaworks seems to be heading for integration with another Intel acquisition, Silicon Hive.

Another analyst speculating on the acquisition, Bryan Ma of IDC, sees the move as one of many steps Intel is taking to "prove it’s better than market leader ARM in the mobile space. It has been trying to position Medfield as a better performance processor using the same power consumption as ARM,” he told The Reg. “In the spirit of this it would make sense for Intel to move for technology and apps which can harness that horsepower to differentiate it from ARM.”

I'm not familiar with the Korean investment landscape but it may be important that the Private Equity Korea article on the acquisition makes a point about Intel's acquisition of Olaworks being the first full Korean acquisition the chip giant has made. It seems that we rarely hear about Korean startups in the West and I suspect that one reason is that the most common exit strategy of a young Korean company is acquisition by one of the global handset manufacturers (LG Electronics, HTC, or Samsung), or one of the large network operators. It's perfectly logical, not only from a cultural point of view but also because the Korean mobile market is large and has a long history of having its own national telecommunications standards.

After NTT-DoCoMo's launch of its 3G service in October 2001, the second 3G network to go commercially live was SK Telecom in South Korea on the CDMA2000 1xEV-DO technology in January 2002 (10 years ago). By May 2002 the second South Korean 3G network was launched by KTF on EV-DO and thus the Koreans were the first to see competition among 3G operators.

I hope that the Olaworks exit signals the opening of Korean technology silos and an opportunity for other regions of the world to benefit from the advances the Koreans have managed to make in their controlled 3G network environment.

Categories
Augmented Reality Research & Development

Project Glass: The Tortoise and The Hare

Remember Aesop's fable about the Tortoise and the Hare? 11,002,798 viewers as of 9 AM Central European Time, April 10, 2012. Since April 4, 2012, Noon Pacific Time, in five and a half days, over the 2012 Easter holiday weekend, the YouTube "vision video" of Google's Project Glass has probably set a benchmark for how quickly a short, exciting video depicting a cool idea can spread through modern, Internet-connected society. [update April 12, 2012: here's an analysis of what the New Media Index found in the social media "storm" around Project Glass.]

The popularity of the video (and the Project Glass Google+ page with 187,000 followers) certainly demonstrates that beyond a few hundred thousand digerati who follow technology trends, there’s a keen interest in alternative ways of displaying digital information. Who are these 11M viewers? Does YouTube have a way to display the geo-location of where the hits originate?

Although the concepts shown in the video aren't entirely new, the digerati are responding and engaging passionately with the concept of handsfree, wearable computing displays. I've seen (visited) no fewer than 50 blog posts on the subject of Project Glass. Most simply report on the appearance of the concept video and ask if it could be possible. There are those who have invested a little more thought.

Blair MacIntyre was one of the first to jump in with his critical assessment less than a day after the announcement. He fears that success (“winning the race”) to new computing experiences will be compromised by Google going too quickly when slow, methodical work will lead to a more certain outcome. Based on the research in Blair’s lab and those of colleagues around the world, Blair knows that the state-of-the-art on many of the core technologies necessary for this Project Glass vision to be real is too primitive to deliver (reliably in the next year) the concepts shown in the video. He fears that by setting the bar as high as the Project Glass video has, expectations will be set too high and failure to deliver will create a generation of skeptics. The “finish line” for all those who envisage a day when information is contextual and delivered in a more intuitive manner will move further out.

In a similar "not too fast" vein, my favorite post (so far; we are still less than a week into this) is Gene Becker's April 6 post (48 hours after the announcement) on his The Connected World blog. Gene shares my fascination with the possibility that head-mounted sensors like those proposed for Project Glass would lead to continuous life capture. Continuous life capture has been explored for years (Gordon Bell has spent his entire career on it and wrote Total Recall; other technologies are actively being developed in projects such as SenseCam), but we've not had all the right components in the right place at the right price. Gene focuses on the potential for participatory media applications. I prefer to focus on the Anticipatory services that could be furnished to users of such devices.

It's not explicitly mentioned, but Gene points out something I've raised, and this is my contribution to the discussion about Project Glass with this post: think about the user inputs to control the system. More than my fingertips, more of the human body (e.g., voice, gesture) will be necessary to control a hands-free information capture, display and control system. Gene writes "Glass will need an interaction language. What are the hands-free equivalents of select, click, scroll, drag, pinch, swipe, copy/paste, show/hide and quit? How does the system differentiate between an interface command and a nod, a word, a glance meant for a friend?"

All movement away from keyboards and mice as input and user interface devices will need a new interaction language.

The success of personal computing in some way leveraged a century of experience with the typewriter keyboard to which a mouse and graphical (2D) user interface were late (recent) but fundamental additions. The success of using sensors on the body and in the real world, and the objects and places as interaction (and display) surfaces for the data will rely on our intelligent use of more of our own senses, use of many more metaphors between the physical and digital world, and highly flexible, multi-modal and open platforms.

Is it appropriate for Google to define its own handsfree information interaction language? I understand that the Kinect camera's point of view is 180 degrees different from that of a head-mounted device, and it is a depth camera, not the simple, small camera on the Project Glass device, but what can we reuse and learn from Kinect? Who else should participate? How many failures before we get this one right? How can a community of experts and users be involved in innovating around and contributing to this important element of our future information and communication platforms?

I'm not suggesting that 2012 is the best or the right time to codify and put standards around voice and/or gesture interfaces, but rather recommending that when Project Glass comes out with a first product, it should include an open interface permitting developers to explore different strategies for controlling information. Google should offer open APIs for interactions, at least to research labs and qualified developers, in the same manner that Microsoft has with Kinect, as soon as possible.
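To make the recommendation concrete, here is one shape such an open interface might take. This is entirely hypothetical (no such API has been announced): the device publishes raw interaction events, and developers register their own mappings from events to commands, so different control strategies can compete.

    from typing import Callable

    Handler = Callable[[str], None]

    class InteractionBus:
        """Hypothetical open interaction API for a hands-free device."""

        def __init__(self) -> None:
            self._handlers: dict = {}  # (modality, token) -> handler

        def bind(self, modality: str, token: str, handler: Handler) -> None:
            """Let a developer attach a command to a raw interaction event."""
            self._handlers[(modality, token)] = handler

        def emit(self, modality: str, token: str) -> None:
            """Called by the device when it recognizes a voice or gesture event."""
            handler = self._handlers.get((modality, token))
            if handler:
                handler(token)

    bus = InteractionBus()
    bus.bind("voice", "take a picture", lambda t: print("camera: capture"))
    bus.bind("gesture", "nod", lambda t: print("ui: confirm"))

    bus.emit("voice", "take a picture")  # -> camera: capture
    bus.emit("gesture", "nod")           # -> ui: confirm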

If Google is the hasty hare, as Blair suggests, is Microsoft the “tortoise” in the journey to provide handsfree interaction? What is Apple working on and will it behave like the tortoise?

Regardless of the order of entry of the big technology players, there will be many others who notice the attention Project Glass has received. The dialog on a myriad of open issues surrounding the new information delivery paradigm is very valuable. I hope Project Glass doesn't release too soon, but with virtually all the posts I've read closing by asking when the blogger can get their hands on and nose under a pair, the pressure to reach the first metaphorical finish line must be enormous.

Categories
Augmented Reality Business Strategy

Augmented Real(ity) Estate

I would like to live in a world in which the real estate agent (an information finder, an "explorer" that uses AR) and the transaction platform are all (or nearly all) digital.

Funda Real Estate, one of the largest real estate firms in the Netherlands, was (to the best of my knowledge) the first Layar customer (and partner). Initially developed in collaboration with our friend Howard Ogden 3 years ago, the Funda layer in the Layar browser permits people to "see" the properties for sale or rent around them, to get more information and contact an agent to schedule a visit.

A few hours ago, Jacob Mullins, a self-proclaimed futurist at Shasta Ventures, shared with the world on TechCrunch how he came to the conclusion that real estate and Augmented Reality go together! Bravo, Jacob! I think the saying is "In real estate there are three things that matter: Location. Location. Location." Unfortunately, none of the companies he cites as "lighthouse" examples are in the real estate industry.

Despite the lack of proper research in his contribution, property searching with AR is definitely one of the best AR use cases in terms of tangible results for the agent and the user. It's not exclusively an urban AR use case (you could do it in an agricultural area as well) but a property in city-center will certainly have greater visibility on an AR service than one in the country. The problem with doing this in most European countries is that properties are represented privately by the seller's agent and there are thousands of seller agents, few of whom have the time or motivation to provide new technology alternatives (read "opportunity").

In the United States, most properties appear (are "listed") in a nationwide Multiple Listing Service, and a buyer's agent does most of the work. Has a company focused on and developed an easy-to-use application on top of one of the AR browsers (or an AR SDK) using the Multiple Listing Service in the US?

My hypothesis is that at about the time the mobile location-based AR platforms were introduced (mid-2009), the US real estate market was on its way to imploding or had already imploded. People were looking to sell, but not to purchase, property.

This brings up the most important question neither raised nor answered in Jacob's opinion piece on TechCrunch: what's the value proposition for the provider of the AR feature? Until there are strong business models that incentivise technology providers to share in the benefits (most likely through transactions), there's not going to be a lot of innovation in this segment.

Are there examples in which the provider of an AR-assisted experience for Real Estate is actually receiving a financial benefit for accelerating a sale or otherwise being part of the sales process? Remember, Jacob, until there are real incentives, there's not likely to be real innovation. Maybe, if there's a really sharp company out there, they will remove the agents entirely from the system.

Looking for property is an experience beginning at a location (remember the three rules of real estate?), the information parts of which are delivered using AR. Help the buyer find the property of their dreams, then help with the seller's side of the transaction, and YOU are the agent.